Unnamed: 0 (int64, 0 to 832k) | id (float64, 2.49B to 32.1B) | type (stringclasses: 1 value) | created_at (stringlengths: 19 to 19) | repo (stringlengths: 5 to 112) | repo_url (stringlengths: 34 to 141) | action (stringclasses: 3 values) | title (stringlengths: 1 to 957) | labels (stringlengths: 4 to 795) | body (stringlengths: 1 to 259k) | index (stringclasses: 12 values) | text_combine (stringlengths: 96 to 259k) | label (stringclasses: 2 values) | text (stringlengths: 96 to 252k) | binary_label (int64, 0 to 1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
291,890 | 8,951,096,896 | IssuesEvent | 2019-01-25 12:55:45 | oVirt/ovirt-web-ui | https://api.github.com/repos/oVirt/ovirt-web-ui | closed | Wrong units in disk utilization tab | Flag: Needs QE Flag: Needs UI review Priority: High Scope: 2 Severity: Medium Type: Bug | 1. 1024M -> 1G
When free space is almost 1G it shows 1024M instead
![screenshot from 2018-12-06 18-24-19](https://user-images.githubusercontent.com/29575480/49639528-894e6280-fa0b-11e8-994b-3a5722dc7629.png)
2. null
When the allocated space is so small, there is 'null' instead of 'TiB' in donut
![screenshot from 2018-12-06 18-30-07](https://user-images.githubusercontent.com/29575480/49639552-98351500-fa0b-11e8-9ebc-68b09c50e142.png)
_Originally posted by @leistnerova in https://github.com/oVirt/ovirt-web-ui/pull/855#issuecomment-445181417_ | 1.0 | Wrong units in disk utilization tab - 1. 1024M -> 1G
When free space is almost 1G it shows 1024M instead
![screenshot from 2018-12-06 18-24-19](https://user-images.githubusercontent.com/29575480/49639528-894e6280-fa0b-11e8-994b-3a5722dc7629.png)
2. null
When the allocated space is so small, there is 'null' instead of 'TiB' in donut
![screenshot from 2018-12-06 18-30-07](https://user-images.githubusercontent.com/29575480/49639552-98351500-fa0b-11e8-9ebc-68b09c50e142.png)
_Originally posted by @leistnerova in https://github.com/oVirt/ovirt-web-ui/pull/855#issuecomment-445181417_ | priority | wrong units in disk utilization tab when free space is almost it shows instead null when the allocated space is so small there is null instead of tib in donut originally posted by leistnerova in | 1 |
530,070 | 15,415,225,064 | IssuesEvent | 2021-03-05 02:06:20 | domialex/Sidekick | https://api.github.com/repos/domialex/Sidekick | closed | Missing explicit from data | Priority: Medium Status: Available Type: Bug | Somehow I can't find this attribute in the API: `You have Shocking Conflux for 3 seconds every 8 seconds`
```
Rarity: Rare
Blight Guardian
Hunter Hood
--------
Evasion Rating: 231 (augmented)
--------
Requirements:
Level: 64
Dex: 87
--------
Sockets: G
--------
Item Level: 80
--------
Adds 28 to 51 Fire Damage to Spells
+28 to Evasion Rating
+47 to maximum Life
11% increased Rarity of Items found
+29% to Cold Resistance
You have Shocking Conflux for 3 seconds every 8 seconds
--------
Hunter Item
```
![image](https://user-images.githubusercontent.com/4694217/106218503-652edc00-61a5-11eb-9ef5-27103370a738.png)
| 1.0 | Missing explicit from data - Somehow I can't find this attribute in the API: `You have Shocking Conflux for 3 seconds every 8 seconds`
```
Rarity: Rare
Blight Guardian
Hunter Hood
--------
Evasion Rating: 231 (augmented)
--------
Requirements:
Level: 64
Dex: 87
--------
Sockets: G
--------
Item Level: 80
--------
Adds 28 to 51 Fire Damage to Spells
+28 to Evasion Rating
+47 to maximum Life
11% increased Rarity of Items found
+29% to Cold Resistance
You have Shocking Conflux for 3 seconds every 8 seconds
--------
Hunter Item
```
![image](https://user-images.githubusercontent.com/4694217/106218503-652edc00-61a5-11eb-9ef5-27103370a738.png)
| priority | missing explicit from data somehow i can t find this attribute in the api you have shocking conflux for seconds every seconds rarity rare blight guardian hunter hood evasion rating augmented requirements level dex sockets g item level adds to fire damage to spells to evasion rating to maximum life increased rarity of items found to cold resistance you have shocking conflux for seconds every seconds hunter item | 1 |
284,956 | 8,752,684,787 | IssuesEvent | 2018-12-14 04:32:55 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | Resolve -ppn issue with internallauncher | bug likelihood medium priority reviewed severity medium | Reference these threads: https://elist.ornl.gov/mailman/htdig/visitdevelopers/2015October/015445.html https://elist.ornl.gov/pipermail/visitusers/2015November/018105.html
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2458
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: Resolve -ppn issue with internallauncher
Assigned to: Kathleen Biagas
Category: -
Target version: 2.10.2
Author: Kathleen Biagas
Start: 11/13/2015
Due date:
% Done: 0%
Estimated time:
Created: 11/13/2015 01:36 pm
Updated: 03/22/2016 06:38 pm
Likelihood: 3 - Occasional
Severity: 3 - Major Irritation
Found in version: 2.10.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Reference these threads: https://elist.ornl.gov/mailman/htdig/visitdevelopers/2015October/015445.html https://elist.ornl.gov/pipermail/visitusers/2015November/018105.html
Comments:
| 1.0 | Resolve -ppn issue with internallauncher - Reference these threads: https://elist.ornl.gov/mailman/htdig/visitdevelopers/2015October/015445.html https://elist.ornl.gov/pipermail/visitusers/2015November/018105.html
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 2458
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: High
Subject: Resolve -ppn issue with internallauncher
Assigned to: Kathleen Biagas
Category: -
Target version: 2.10.2
Author: Kathleen Biagas
Start: 11/13/2015
Due date:
% Done: 0%
Estimated time:
Created: 11/13/2015 01:36 pm
Updated: 03/22/2016 06:38 pm
Likelihood: 3 - Occasional
Severity: 3 - Major Irritation
Found in version: 2.10.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Reference these threads: https://elist.ornl.gov/mailman/htdig/visitdevelopers/2015October/015445.html https://elist.ornl.gov/pipermail/visitusers/2015November/018105.html
Comments:
| priority | resolve ppn issue with internallauncher reference these threads redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority high subject resolve ppn issue with internallauncher assigned to kathleen biagas category target version author kathleen biagas start due date done estimated time created pm updated pm likelihood occasional severity major irritation found in version impact expected use os all support group any description reference these threads comments | 1 |
419,792 | 12,228,469,324 | IssuesEvent | 2020-05-03 19:33:33 | x13pixels/remedybg-issues | https://api.github.com/repos/x13pixels/remedybg-issues | closed | char and wchar_t misdisplayed in watch window | Component: Engine Component: Watch Window Priority: 5 (Medium) Type: Bug | I've noticed three separate issues with char/wchar_t types in the watch window. Please let me know if you want me to split this up into separate issues here.
I've been using version 0.3.0.5, compiling with msvc 19.22.27905 (only passing /Zi), and am running windows 8.1.
1. When there is a percentage sign in a `char *`, if it is expanded in the watch window the percentage sign will draw as an empty character literal: `37 ''` instead of `37 '%'`.
2. When specifying a length for `wchar_t` literals in the watch window (`my_wchar_array, 6`), only half the number of characters specified are displayed in the value field (i.e. if `my_wchar_array = L"nice"` the value field displays `{ ... } 0xfoofboof "nic" ` if length is 6). The correct number of characters are still displayed in the expanded view though.
3. When expanding a `wchar_t *` value in the watch window, the string displayed in the value field for the expanded item gets messed up when the focus moves from the watch window to the source window (i.e. if `foo = L"a"` and I expand `a` in the watch window, the expanded entries value changes from `97 'a'` to `97 'a?`) | 1.0 | char and wchar_t misdisplayed in watch window - I've noticed three separate issues with char/wchar_t types in the watch window. Please let me know if you want me to split this up into separate issues here.
I've been using version 0.3.0.5, compiling with msvc 19.22.27905 (only passing /Zi), and am running windows 8.1.
1. When there is a percentage sign in a `char *`, if it is expanded in the watch window the percentage sign will draw as an empty character literal: `37 ''` instead of `37 '%'`.
2. When specifying a length for `wchar_t` literals in the watch window (`my_wchar_array, 6`), only half the number of characters specified are displayed in the value field (i.e. if `my_wchar_array = L"nice"` the value field displays `{ ... } 0xfoofboof "nic" ` if length is 6). The correct number of characters are still displayed in the expanded view though.
3. When expanding a `wchar_t *` value in the watch window, the string displayed in the value field for the expanded item gets messed up when the focus moves from the watch window to the source window (i.e. if `foo = L"a"` and I expand `a` in the watch window, the expanded entries value changes from `97 'a'` to `97 'a?`) | priority | char and wchar t misdisplayed in watch window i ve noticed three separate issues with char wchar t types in the watch window please let me know if you want me to split this up into separate issues here i ve been using version compiling with msvc only passing zi and am running windows when there is a percentage sign in a char if it is expanded in the watch window the percentage sign will draw as an empty character literal instead of when specifying a length for wchar t literals in the watch window my wchar array only half the number of characters specified are displayed in the value field i e if my wchar array l nice the value field displays nic if length is the correct number of characters are still displayed in the expanded view though when expanding a wchar t value in the watch window the string displayed in the value field for the expanded item gets messed up when the focus moves from the watch window to the source window i e if foo l a and i expand a in the watch window the expanded entries value changes from a to a | 1 |
274,326 | 8,559,696,686 | IssuesEvent | 2018-11-08 22:05:52 | OpenSRP/opensrp-server-web | https://api.github.com/repos/OpenSRP/opensrp-server-web | opened | Develop Nifi Template to extract Locations from OpenSRP Server and load into data warehouse | Data Warehouse Priority: Medium | We need to develop a Nifi template to extract locations from OpenSRP server and store them in the data warehouse. | 1.0 | Develop Nifi Template to extract Locations from OpenSRP Server and load into data warehouse - We need to develop a Nifi template to extract locations from OpenSRP server and store them in the data warehouse. | priority | develop nifi template to extract locations from opensrp server and load into data warehouse we need to develop a nifi template to extract locations from opensrp server and store them in the data warehouse | 1 |
77,060 | 3,506,257,098 | IssuesEvent | 2016-01-08 05:01:28 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | [Bug]Instance Cooldown (BB #121) | migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:** pdx15
**Original Date:** 19.04.2010 20:12:11 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** invalid
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/121
<hr>
Problem in that after murder, for example Kalecgos, cooldown do not appear. And it is a lot of such cases, it turns out so that every day it is possible farm BT and SWP and HS.
And it is not clear, under what circumstances falls down cooldown, random)) | 1.0 | [Bug]Instance Cooldown (BB #121) - This issue was migrated from bitbucket.
**Original Reporter:** pdx15
**Original Date:** 19.04.2010 20:12:11 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** invalid
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/121
<hr>
Problem in that after murder, for example Kalecgos, cooldown do not appear. And it is a lot of such cases, it turns out so that every day it is possible farm BT and SWP and HS.
And it is not clear, under what circumstances falls down cooldown, random)) | priority | instance cooldown bb this issue was migrated from bitbucket original reporter original date gmt original priority major original type bug original state invalid direct link problem in that after murder for example kalecgos cooldown do not appear and it is a lot of such cases it turns out so that every day it is possible farm bt and swp and hs and it is not clear under what circumstances falls down cooldown random | 1 |
77,189 | 3,506,270,205 | IssuesEvent | 2016-01-08 05:09:38 | OregonCore/OregonCore | https://api.github.com/repos/OregonCore/OregonCore | closed | Poisons dont proc (BB #251) | migrated Priority: Medium Type: Bug | This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 02.08.2010 23:52:29 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/251
<hr>
Poisons dont proc through someones shield, you need to shiv, it doesnt proc on autoattacks | 1.0 | Poisons dont proc (BB #251) - This issue was migrated from bitbucket.
**Original Reporter:**
**Original Date:** 02.08.2010 23:52:29 GMT+0000
**Original Priority:** major
**Original Type:** bug
**Original State:** resolved
**Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/251
<hr>
Poisons dont proc through someones shield, you need to shiv, it doesnt proc on autoattacks | priority | poisons dont proc bb this issue was migrated from bitbucket original reporter original date gmt original priority major original type bug original state resolved direct link poisons dont proc through someones shield you need to shiv it doesnt proc on autoattacks | 1 |
263,837 | 8,302,336,805 | IssuesEvent | 2018-09-21 14:15:36 | ZeusWPI/MOZAIC | https://api.github.com/repos/ZeusWPI/MOZAIC | opened | Proxy server | difficulty:hard networking priority:medium | Create a proxy for MOZAIC connections. Basically, this would require creating some 'forwarding table', to which you'd add (match, destination) pairs.
You could then register your locally-running match with the proxy, so that your games can traverse NATs and whatnot.
| 1.0 | Proxy server - Create a proxy for MOZAIC connections. Basically, this would require creating some 'forwarding table', to which you'd add (match, destination) pairs.
You could then register your locally-running match with the proxy, so that your games can traverse NATs and whatnot.
| priority | proxy server create a proxy for mozaic connections basically this would require creating some forwarding table to which you d add match destination pairs you could then register your locally running match with the proxy so that your games can traverse nats and whatnot | 1 |
721,380 | 24,824,269,511 | IssuesEvent | 2022-10-25 19:09:16 | dnnsoftware/Dnn.Platform | https://api.github.com/repos/dnnsoftware/Dnn.Platform | closed | Cannot delete too many files at once in new Resource Manager | Type: Bug Area: Platform > Admin Modules Effort: Medium Priority: High Status: Ready for Development | ## Description of bug
When attempting to delete many files in the new Resource Manager, the UI does not allow for scrollable content.
## Steps to reproduce
1. Go to 'Site Assets' or `Global Assets`.
2. Upload a lot of files into the 'Images' folder.
3. Select a significant number of the files and click `Delete`.
4. See UI problem in screenshot below.
## Current behavior
The content is too tall for the page and the buttons are no longer viewable.
## Expected behavior
Content should be scrollable within the modal and limited to a percentage of the view height, allowing room for buttons.
## Screenshots
![image](https://user-images.githubusercontent.com/4568451/197083823-aa4eb5cd-afb0-4e83-a9c5-b6b1fac31681.png)
## Error information
n/a
## Additional context
n/a
## Affected version
* [ ] 10.00.00 alpha build
* [x] 09.11.00 release candidate
* [ ] 09.10.02 release candidate
* [ ] 09.10.01 latest supported release
## Affected browser
* [x] Chrome
* [x] Firefox
* [x] Safari
* [x] Internet Explorer 11
* [x] Microsoft Edge (Classic)
* [x] Microsoft Edge Chromium
| 1.0 | Cannot delete too many files at once in new Resource Manager - ## Description of bug
When attempting to delete many files in the new Resource Manager, the UI does not allow for scrollable content.
## Steps to reproduce
1. Go to 'Site Assets' or `Global Assets`.
2. Upload a lot of files into the 'Images' folder.
3. Select a significant number of the files and click `Delete`.
4. See UI problem in screenshot below.
## Current behavior
The content is too tall for the page and the buttons are no longer viewable.
## Expected behavior
Content should be scrollable within the modal and limited to a percentage of the view height, allowing room for buttons.
## Screenshots
![image](https://user-images.githubusercontent.com/4568451/197083823-aa4eb5cd-afb0-4e83-a9c5-b6b1fac31681.png)
## Error information
n/a
## Additional context
n/a
## Affected version
* [ ] 10.00.00 alpha build
* [x] 09.11.00 release candidate
* [ ] 09.10.02 release candidate
* [ ] 09.10.01 latest supported release
## Affected browser
* [x] Chrome
* [x] Firefox
* [x] Safari
* [x] Internet Explorer 11
* [x] Microsoft Edge (Classic)
* [x] Microsoft Edge Chromium
| priority | cannot delete too many files at once in new resource manager description of bug when attempting to delete many files in the new resource manager the ui does not allow for scrollable content steps to reproduce go to site assets or global assets upload a lot of files into the images folder select a significant number of the files and click delete see ui problem in screenshot below current behavior the content is too tall for the page and the buttons are no longer viewable expected behavior content should be scrollable within the modal and limited to a percentage of the view height allowing room for buttons screenshots error information n a additional context n a affected version alpha build release candidate release candidate latest supported release affected browser chrome firefox safari internet explorer microsoft edge classic microsoft edge chromium | 1 |
775,809 | 27,237,947,417 | IssuesEvent | 2023-02-21 17:44:56 | etternagame/etterna | https://api.github.com/repos/etternagame/etterna | opened | Replay playback slightly off by a tiny amount | Type: Bug Priority: Medium | 2 greats in playback, pfc in local and online replay
online playback is correct
https://etternaonline.com/song/view/103571#154228
https://etternaonline.com/score/view/Sdcbf00b32b1f08830b5985c1c70538dae09223e250483
https://www.youtube.com/watch?v=bcoWxVylsQI | 1.0 | Replay playback slightly off by a tiny amount - 2 greats in playback, pfc in local and online replay
online playback is correct
https://etternaonline.com/song/view/103571#154228
https://etternaonline.com/score/view/Sdcbf00b32b1f08830b5985c1c70538dae09223e250483
https://www.youtube.com/watch?v=bcoWxVylsQI | priority | replay playback slightly off by a tiny amount greats in playback pfc in local and online replay online playback is correct | 1 |
397,551 | 11,729,244,941 | IssuesEvent | 2020-03-10 19:00:33 | react-figma/react-figma | https://api.github.com/repos/react-figma/react-figma | closed | horizontalPadding/verticalPadding is not compatible with Yoga | complexity: medium priority: medium topic: figma api topic: yoga type: bug | Let's consider example:
```javascript
const NewComponent = createComponent();
export const App = () => {
return (
<Page name="New page" isCurrent>
<NewComponent.Component>
<View
layoutMode='HORIZONTAL'
horizontalPadding={4}
verticalPadding={1}
style={{
backgroundColor: '#EDF1F5'
}}
>
<Text
characters='testing'
/>
</View>
</NewComponent.Component>
</Page>
);
};
```
Component will have size that equal to text (37x14):
<img width="828" alt="Screenshot 2020-02-27 at 23 06 33" src="https://user-images.githubusercontent.com/1270648/75482419-df634900-59b5-11ea-8381-195006a24225.png">
| 1.0 | horizontalPadding/verticalPadding is not compatible with Yoga - Let's consider example:
```javascript
const NewComponent = createComponent();
export const App = () => {
return (
<Page name="New page" isCurrent>
<NewComponent.Component>
<View
layoutMode='HORIZONTAL'
horizontalPadding={4}
verticalPadding={1}
style={{
backgroundColor: '#EDF1F5'
}}
>
<Text
characters='testing'
/>
</View>
</NewComponent.Component>
</Page>
);
};
```
Component will have size that equal to text (37x14):
<img width="828" alt="Screenshot 2020-02-27 at 23 06 33" src="https://user-images.githubusercontent.com/1270648/75482419-df634900-59b5-11ea-8381-195006a24225.png">
| priority | horizontalpadding verticalpadding is not compatible with yoga let s consider example javascript const newcomponent createcomponent export const app return view layoutmode horizontal horizontalpadding verticalpadding style backgroundcolor text characters testing component will have size that equal to text img width alt screenshot at src | 1 |
706,975 | 24,290,039,976 | IssuesEvent | 2022-09-29 04:35:59 | JasonBock/Rocks | https://api.github.com/repos/JasonBock/Rocks | closed | Use Hash Code to Get Consistent Value From String | enhancement Medium Priority | Using `GetHashCode()` on a `string` seems to yield a different value within every instance of an application or test run. Which is fine - `GetHashCode()` isn't meant to be an object identifier. But I'm using this in `MockProjectedDelegateBuilder`, and it makes testing code that projects these types ... well, hard, to say the least. I'd like to create a hash code for a `string` that will always give me the same answer every single time. It doesn't even have to be a 32-bit number; 64- or 128-bits are fine as well. | 1.0 | Use Hash Code to Get Consistent Value From String - Using `GetHashCode()` on a `string` seems to yield a different value within every instance of an application or test run. Which is fine - `GetHashCode()` isn't meant to be an object identifier. But I'm using this in `MockProjectedDelegateBuilder`, and it makes testing code that projects these types ... well, hard, to say the least. I'd like to create a hash code for a `string` that will always give me the same answer every single time. It doesn't even have to be a 32-bit number; 64- or 128-bits are fine as well. | priority | use hash code to get consistent value from string using gethashcode on a string seems to yield a different value within every instance of an application or test run which is fine gethashcode isn t meant to be an object identifier but i m using this in mockprojecteddelegatebuilder and it makes testing code that projects these types well hard to say the least i d like to create a hash code for a string that will always give me the same answer every single time it doesn t even have to be a bit number or bits are fine as well | 1 |
288,872 | 8,852,541,138 | IssuesEvent | 2019-01-08 18:39:10 | visit-dav/issues-test | https://api.github.com/repos/visit-dav/issues-test | closed | BOV reader coring in avtNekDomainBoundaries for certain settings for DATA_BRICKLETS | bug crash likelihood medium priority reviewed severity high wrong results |
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 294
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: BOV reader coring in avtNekDomainBoundaries for certain settings for DATA_BRICKLETS
Assigned to: Mark Miller
Category: -
Target version: 2.0.2
Author: Mark Miller
Start: 07/28/2010
Due date:
% Done: 90%
Estimated time: 0.10 hour
Created: 07/28/2010 10:19 am
Updated: 08/03/2010 07:04 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.0.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Comments:
There was an indexing loop variable error. Hank and I eye-balled together. Fixed in r12045
| 1.0 | BOV reader coring in avtNekDomainBoundaries for certain settings for DATA_BRICKLETS -
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 294
Status: Resolved
Project: VisIt
Tracker: Bug
Priority: Urgent
Subject: BOV reader coring in avtNekDomainBoundaries for certain settings for DATA_BRICKLETS
Assigned to: Mark Miller
Category: -
Target version: 2.0.2
Author: Mark Miller
Start: 07/28/2010
Due date:
% Done: 90%
Estimated time: 0.10 hour
Created: 07/28/2010 10:19 am
Updated: 08/03/2010 07:04 pm
Likelihood: 3 - Occasional
Severity: 4 - Crash / Wrong Results
Found in version: 2.0.0
Impact:
Expected Use:
OS: All
Support Group: Any
Description:
Comments:
There was an indexing loop variable error. Hank and I eye-balled together. Fixed in r12045
| priority | bov reader coring in avtnekdomainboundaries for certain settings for data bricklets redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker bug priority urgent subject bov reader coring in avtnekdomainboundaries for certain settings for data bricklets assigned to mark miller category target version author mark miller start due date done estimated time hour created am updated pm likelihood occasional severity crash wrong results found in version impact expected use os all support group any description comments there was an indexing loop variable error hank and i eye balled together fixed in | 1 |
244,244 | 7,872,541,801 | IssuesEvent | 2018-06-25 11:36:55 | VulcanForge/pvp-mode | https://api.github.com/repos/VulcanForge/pvp-mode | opened | Players with PvP Mode ON determine use of proximity info | improvement medium priority new feature | Provide a command '/pvp radar' for players with PvP Mode ON to toggle showing (ON) / not showing (OFF) the proximity info of themselves and other players with PvP Mode ON. If a player toggles it ON, his proximity is shown to all players with PvP Mode ON, and (s)he gets the proximity info of all other players with PvP Mode ON when using /pvplist.
This empowers players who prefer PvP, but wish to be hard to find at times, to decide for themselves if they can actively hunt and get actively hunted with assistance of player proximity info. | 1.0 | Players with PvP Mode ON determine use of proximity info - Provide a command '/pvp radar' for players with PvP Mode ON to toggle showing (ON) / not showing (OFF) the proximity info of themselves and other players with PvP Mode ON. If a player toggles it ON, his proximity is shown to all players with PvP Mode ON, and (s)he gets the proximity info of all other players with PvP Mode ON when using /pvplist.
This empowers players who prefer PvP, but wish to be hard to find at times, to decide for themselves if they can actively hunt and get actively hunted with assistance of player proximity info. | priority | players with pvp mode on determine use of proximity info provide a command pvp radar for players with pvp mode on to toggle showing on not showing off the proximity info of themselves and other players with pvp mode on if a player toggles it on his proximity is shown to all players with pvp mode on and s he gets the proximity info of all other players with pvp mode on when using pvplist this empowers players who prefer pvp but wish to be hard to find at times to decide for themselves if they can actively hunt and get actively hunted with assistance of player proximity info | 1 |
792,551 | 27,965,253,380 | IssuesEvent | 2023-03-24 18:54:41 | sunpy/sunpy | https://api.github.com/repos/sunpy/sunpy | closed | Provide easier access to valid TimeSeries source keywords | Feature Request Package Novice Priority Low Effort Medium timeseries Hacktoberfest Good First Issue | <!--
We know asking good questions takes effort, and we appreciate your time.
Thank you.
Please be aware that everyone has to follow our code of conduct:
https://github.com/sunpy/sunpy/blob/master/CODE_OF_CONDUCT.rst
Also that these comments are hidden when you submit this github issue.
Please have a search on our GitHub repository to see if a similar issue has already been posted.
If a similar issue is closed, have a quick look to see if you are satisfied by the resolution.
If not please go ahead and open an issue!
-->
### Description
<!--
Provide a general description of the feature you would like.
If you prefer, you can also suggest a draft design or API.
This way we have a deeper discussion on the feature.
-->
I suggest that it should be shown at the top when a user inputs the command
```python
help(TimeSeries)
```
### Additional context
<!--
Add any other context or screenshots
This part is optional.
Delete this section heading if you do not use it.
-->
| 1.0 | Provide easier access to valid TimeSeries source keywords - <!--
We know asking good questions takes effort, and we appreciate your time.
Thank you.
Please be aware that everyone has to follow our code of conduct:
https://github.com/sunpy/sunpy/blob/master/CODE_OF_CONDUCT.rst
Also that these comments are hidden when you submit this github issue.
Please have a search on our GitHub repository to see if a similar issue has already been posted.
If a similar issue is closed, have a quick look to see if you are satisfied by the resolution.
If not please go ahead and open an issue!
-->
### Description
<!--
Provide a general description of the feature you would like.
If you prefer, you can also suggest a draft design or API.
This way we have a deeper discussion on the feature.
-->
I suggest that it should be shown at the top when a user inputs the command
```python
help(TimeSeries)
```
### Additional context
<!--
Add any other context or screenshots
This part is optional.
Delete this section heading if you do not use it.
-->
| priority | provide easier access to valid timeseries source keywords we know asking good questions takes effort and we appreciate your time thank you please be aware that everyone has to follow our code of conduct also that these comments are hidden when you submit this github issue please have a search on our github repository to see if a similar issue has already been posted if a similar issue is closed have a quick look to see if you are satisfied by the resolution if not please go ahead and open an issue description provide a general description of the feature you would like if you prefer you can also suggest a draft design or api this way we have a deeper discussion on the feature i suggest that it should be shown at the top when a user inputs the command python help timeseries additional context add any other context or screenshots this part is optional delete this section heading if you do not use it | 1 |
394,199 | 11,633,302,498 | IssuesEvent | 2020-02-28 07:54:53 | Repair-DeskPOS/RepairDesk-BUGS-IMPROVEMENTS | https://api.github.com/repos/Repair-DeskPOS/RepairDesk-BUGS-IMPROVEMENTS | closed | Diagnostic Notes | Added to Roadmap Medium Priority enhancement help wanted | When I open a ticket AFTER it has been booked in through manage tickets, there is a box for diagnostic note and a flag. When I enter notes and save the notes disappear. Nothing different if I flag it. | 1.0 | Diagnostic Notes - When I open a ticket AFTER it has been booked in through manage tickets, there is a box for diagnostic note and a flag. When I enter notes and save the notes disappear. Nothing different if I flag it. | priority | diagnostic notes when i open a ticket after it has been booked in through manage tickets there is a box for diagnostic note and a flag when i enter notes and save the notes disappear nothing different if i flag it | 1 |
40,426 | 2,868,919,029 | IssuesEvent | 2015-06-05 21:57:40 | dart-lang/pub | https://api.github.com/repos/dart-lang/pub | closed | Add path: source strategy to pub, pull packages from location on disk | enhancement Fixed Priority-Medium | <a href="https://github.com/sethladd"><img src="https://avatars.githubusercontent.com/u/5479?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [sethladd](https://github.com/sethladd)**
_Originally opened as dart-lang/sdk#3732_
----
Not all packages come from git, some come from a location on disk (like the SDK). Add a path: source strategy to find and use them.
e.g.
dependencies:
unittest:
path: /absolute/path/to/sdk/lib
| 1.0 | Add path: source strategy to pub, pull packages from location on disk - <a href="https://github.com/sethladd"><img src="https://avatars.githubusercontent.com/u/5479?v=3" align="left" width="96" height="96"hspace="10"></img></a> **Issue by [sethladd](https://github.com/sethladd)**
_Originally opened as dart-lang/sdk#3732_
----
Not all packages come from git, some come from a location on disk (like the SDK). Add a path: source strategy to find and use them.
e.g.
dependencies:
unittest:
path: /absolute/path/to/sdk/lib
| priority | add path source strategy to pub pull packages from location on disk issue by originally opened as dart lang sdk not all packages come from git some come from a location on disk like the sdk add a path source strategy to find and use them e g dependencies nbsp nbsp unittest nbsp nbsp nbsp nbsp path absolute path to sdk lib | 1 |
82,330 | 3,605,510,185 | IssuesEvent | 2016-02-04 05:53:45 | bethlakshmi/GBE2 | https://api.github.com/repos/bethlakshmi/GBE2 | opened | MC2 not displaying on calendar | bug Medium Priority | MC2 (this event http://www.burlesque-expo.com/scheduler/details/541) does not appear on http://www.burlesque-expo.com/scheduler/Show/Saturday but MC1 (this event http://www.burlesque-expo.com/scheduler/details/540) does and I can't why they don' both appear. | 1.0 | MC2 not displaying on calendar - MC2 (this event http://www.burlesque-expo.com/scheduler/details/541) does not appear on http://www.burlesque-expo.com/scheduler/Show/Saturday but MC1 (this event http://www.burlesque-expo.com/scheduler/details/540) does and I can't why they don' both appear. | priority | not displaying on calendar this event does not appear on but this event does and i can t why they don both appear | 1 |
636,324 | 20,597,379,560 | IssuesEvent | 2022-03-05 18:20:11 | 619Code/Robot2022 | https://api.github.com/repos/619Code/Robot2022 | closed | Develop autonomous | Medium priority | I know we already can get off the tarmac, but we should add shooting if we have time
| 1.0 | Develop autonomous - I know we already can get off the tarmac, but we should add shooting if we have time
| priority | develop autonomous i know we already can get off the tarmac but we should add shooting if we have time | 1 |
416,239 | 12,141,674,490 | IssuesEvent | 2020-04-23 23:12:14 | opendifferentialprivacy/whitenoise-core | https://api.github.com/repos/opendifferentialprivacy/whitenoise-core | reopened | Namespace issues for python and whitenoise | Effort 2 - Medium :cookie: Effort 3 - Large :cake: Priority 1: High | (This was a known issue and overlooked in the conversion from yarrow->whitenoise)
- [ ] change namespace to `opendp.whitenoise_core`, as in:
- `import opendp.whitenoise_core`
- [ ] In the **setup.cfg file**, the `packages` value should be `opendp.whitenoise_core`
- e.g. the **setup.py** equivalent of:
- ` packages=['opendp.whitenoise_core'],`
- [ ] important. The **opendp** directory should NOT have an `__init__.py` file
- reference: https://packaging.python.org/guides/packaging-namespace-packages/#native-namespace-packages
- This allows the whitenoise-system pypi package to also use the `opendp` namespace
- related ticket opendifferentialprivacy/whitenoise-system#189
---
Need to namespace with **opendp**. Whitenoise is already in use in pypi:
```python
from opendp.whitenoise.sql import PandasReader, PrivateReader
from opendp.whitenoise.metadata import CollectionMetadata
```
source: https://github.com/opendifferentialprivacy/whitenoise-samples/blob/master/data/SQL%20Queries.ipynb
---
Example of how it is currently in the whitenoise-core notebooks and **will conflict** with the current pypi whitenoise package (that package is used for serving static files--nothing to do with our project).
```python
import whitenoise
import whitenoise.components as op
```
source: https://github.com/opendifferentialprivacy/whitenoise-samples/blob/master/analysis/basic_data_analysis.ipynb
---
# secondary checklist
- [ ] python tests
- [ ] sample notebooks (may move this to a separate issue) | 1.0 | Namespace issues for python and whitenoise - (This was a known issue and overlooked in the conversion from yarrow->whitenoise)
- [ ] change namespace to `opendp.whitenoise_core`, as in:
- `import opendp.whitenoise_core`
- [ ] In the **setup.cfg file**, the `packages` value should be `opendp.whitenoise_core`
- e.g. the **setup.py** equivalent of:
- ` packages=['opendp.whitenoise_core'],`
- [ ] important. The **opendp** directory should NOT have an `__init__.py` file
- reference: https://packaging.python.org/guides/packaging-namespace-packages/#native-namespace-packages
- This allows the whitenoise-system pypi package to also use the `opendp` namespace
- related ticket opendifferentialprivacy/whitenoise-system#189
---
Need to namespace with **opendp**. Whitenoise is already in use in pypi:
```python
from opendp.whitenoise.sql import PandasReader, PrivateReader
from opendp.whitenoise.metadata import CollectionMetadata
```
source: https://github.com/opendifferentialprivacy/whitenoise-samples/blob/master/data/SQL%20Queries.ipynb
---
Example of how it is currently in the whitenoise-core notebooks and **will conflict** with the current pypi whitenoise package (that package is used for serving static files--nothing to do with our project).
```python
import whitenoise
import whitenoise.components as op
```
source: https://github.com/opendifferentialprivacy/whitenoise-samples/blob/master/analysis/basic_data_analysis.ipynb
---
# secondary checklist
- [ ] python tests
- [ ] sample notebooks (may move this to a separate issue) | priority | namespace issues for python and whitenoise this was a known issue and overlooked in the conversion from yarrow whitenoise change namespace to opendp whitenoise core as in import opendp whitenoise core in the setup cfg file the packages value should be opendp whitenoise core e g the setup py equivalent of packages important the opendp directory should not have an init py file reference this allows the whitenoise system pypi package to also use the opendp namespace related ticket opendifferentialprivacy whitenoise system need to namespace with opendp whitenoise is already in use in pypi python from opendp whitenoise sql import pandasreader privatereader from opendp whitenoise metadata import collectionmetadata source example of how it is currently in the whitenoise core notebooks and will conflict with the current pypi whitenoise package that package is used for serving static files nothing to do with our project python import whitenoise import whitenoise components as op source secondary checklist python tests sample notebooks may move this to a separate issue | 1 |
674,482 | 23,052,516,843 | IssuesEvent | 2022-07-24 20:59:52 | corsaircraft-dev/issues | https://api.github.com/repos/corsaircraft-dev/issues | opened | Remove AFK messages. | chat medium-priority | As we have discovered, people abuse AFK status messages to spam the chat. Instead, we can
- notify players when they send a private message to an AFK player
- display an `[AFK]` tag next to the name in the tab list | 1.0 | Remove AFK messages. - As we have discovered, people abuse AFK status messages to spam the chat. Instead, we can
- notify players when they send a private message to an AFK player
- display an `[AFK]` tag next to the name in the tab list | priority | remove afk messages as we have discovered people abuse afk status messages to spam the chat instead we can notify players when they send a private message to an afk player display an tag next to the name in the tab list | 1 |
628,275 | 19,981,572,084 | IssuesEvent | 2022-01-30 00:58:37 | fasten-project/fasten | https://api.github.com/repos/fasten-project/fasten | closed | Graph resolver doesn't resolve dependencies transitively | bug Priority: Medium | ## Describe the bug
There is a utility method of Graph Resolver that resolves dependencies. It accepts argument that forces the method to resolve transitive dependencies. It does not in fact retrieves transitive dependencies.
## To Reproduce
Steps to reproduce the behavior:
1. `com.fasterxml.jackson.core:jackson-databind/2.8.11` depends on `junit:junit/4.12` - [CHECK HERE](https://api.fasten-project.eu/api/mvn/packages/com.fasterxml.jackson.core:jackson-databind/2.8.11/resolve/dependencies) ✅
2. `com.fasterxml.jackson.core:jackson-databind/2.8.11` has a dependent `com.lightbend.lagom:lagom-scaladsl-cluster_2.11/1.4.1` - [CHECK HERE](https://api.fasten-project.eu/api/mvn/packages/com.fasterxml.jackson.core:jackson-databind/2.8.11/resolve/dependents) ✅
3. If `com.lightbend.lagom:lagom-scaladsl-cluster_2.11/1.4.1` resolves dependencies with transitive flag, it doesn't have `junit` in results. [CHECK HERE](https://api.fasten-project.eu/api/mvn/packages/com.lightbend.lagom:lagom-scaladsl-cluster_2.11/1.4.1/resolve/dependencies?transitive=true) ❌
## Expected behavior
As transtive flag suggests, it is supposed to force the method to retrieve not only first level dependencies, but transitive dependencies as well. Thus, `com.lightbend.lagom:lagom-scaladsl-cluster_2.11/1.4.1` shall have in dependencies `junit:junit/4.12`, but it doesn't.
## Additional context
Although the justifications are given by API endpoints, which theoretically could mess up something inside, but debugging of Vulnerability Cache Processor plugin that uses the graph resolver directly also shows the same problem, thus API is not the source of the problem.
> In figure you see example of `com.lightbend.lagom:lagom-scaladsl-cluster_2.11/1.4.1` from direct use of graph resolver by Vulnerability Cache Processor plugin. Although we force transitivity in code, we get only 3 dependencies that are 1st level dependencies.
<img width="1184" alt="Screenshot 2021-08-26 at 01 41 58" src="https://user-images.githubusercontent.com/14923964/130878474-40172845-1ad4-4103-b311-1fe7d68cf72d.png"> | 1.0 | Graph resolver doesn't resolve dependencies transitively - ## Describe the bug
There is a utility method of Graph Resolver that resolves dependencies. It accepts argument that forces the method to resolve transitive dependencies. It does not in fact retrieves transitive dependencies.
## To Reproduce
Steps to reproduce the behavior:
1. `com.fasterxml.jackson.core:jackson-databind/2.8.11` depends on `junit:junit/4.12` - [CHECK HERE](https://api.fasten-project.eu/api/mvn/packages/com.fasterxml.jackson.core:jackson-databind/2.8.11/resolve/dependencies) ✅
2. `com.fasterxml.jackson.core:jackson-databind/2.8.11` has a dependent `com.lightbend.lagom:lagom-scaladsl-cluster_2.11/1.4.1` - [CHECK HERE](https://api.fasten-project.eu/api/mvn/packages/com.fasterxml.jackson.core:jackson-databind/2.8.11/resolve/dependents) ✅
3. If `com.lightbend.lagom:lagom-scaladsl-cluster_2.11/1.4.1` resolves dependencies with transitive flag, it doesn't have `junit` in results. [CHECK HERE](https://api.fasten-project.eu/api/mvn/packages/com.lightbend.lagom:lagom-scaladsl-cluster_2.11/1.4.1/resolve/dependencies?transitive=true) ❌
## Expected behavior
As transtive flag suggests, it is supposed to force the method to retrieve not only first level dependencies, but transitive dependencies as well. Thus, `com.lightbend.lagom:lagom-scaladsl-cluster_2.11/1.4.1` shall have in dependencies `junit:junit/4.12`, but it doesn't.
## Additional context
Although the justifications are given by API endpoints, which theoretically could mess up something inside, but debugging of Vulnerability Cache Processor plugin that uses the graph resolver directly also shows the same problem, thus API is not the source of the problem.
> In figure you see example of `com.lightbend.lagom:lagom-scaladsl-cluster_2.11/1.4.1` from direct use of graph resolver by Vulnerability Cache Processor plugin. Although we force transitivity in code, we get only 3 dependencies that are 1st level dependencies.
<img width="1184" alt="Screenshot 2021-08-26 at 01 41 58" src="https://user-images.githubusercontent.com/14923964/130878474-40172845-1ad4-4103-b311-1fe7d68cf72d.png"> | priority | graph resolver doesn t resolve dependencies transitively describe the bug there is a utility method of graph resolver that resolves dependencies it accepts argument that forces the method to resolve transitive dependencies it does not in fact retrieves transitive dependencies to reproduce steps to reproduce the behavior com fasterxml jackson core jackson databind depends on junit junit ✅ com fasterxml jackson core jackson databind has a dependent com lightbend lagom lagom scaladsl cluster ✅ if com lightbend lagom lagom scaladsl cluster resolves dependencies with transitive flag it doesn t have junit in results ❌ expected behavior as transtive flag suggests it is supposed to force the method to retrieve not only first level dependencies but transitive dependencies as well thus com lightbend lagom lagom scaladsl cluster shall have in dependencies junit junit but it doesn t additional context although the justifications are given by api endpoints which theoretically could mess up something inside but debugging of vulnerability cache processor plugin that uses the graph resolver directly also shows the same problem thus api is not the source of the problem in figure you see example of com lightbend lagom lagom scaladsl cluster from direct use of graph resolver by vulnerability cache processor plugin although we force transitivity in code we get only dependencies that are level dependencies img width alt screenshot at src | 1 |
422,503 | 12,279,403,632 | IssuesEvent | 2020-05-08 12:07:50 | getkirby/kirby | https://api.github.com/repos/getkirby/kirby | closed | No error thrown when accessing private and protected members of Kirby\Cms\App in a plugin | priority: medium 🔜 type: bug 🐛 | **Describe the bug**
In a plugin, you can access `kirby()->roots` and no error will be thrown. We had a discussion about that with @distantnative [here](https://forum.getkirby.com/t/workaround-for-a-plugin-setting-before-the-plugin-is-initialized/16846/7?u=hdodov).
**To Reproduce**
Steps to reproduce the behavior:
1. Clone the plainkit
2. Create a plugin
3. Put `var_dump(kirby()->roots)` in the plugin _index.php_ file
4. No error thrown
**Expected behavior**
An error should be thrown.
**Kirby Version**
3.3.2
**Additional context**
If you do `var_dump(kirby()->roots)` in a _template_, you correctly get the error:
>Cannot access protected property Kirby\Cms\App::$roots
However, doing the same thing in a plugin works, and it shouldn't. @distantnative [mentioned](https://forum.getkirby.com/t/workaround-for-a-plugin-setting-before-the-plugin-is-initialized/16846/8) this might be due to the way hooks are triggered:
https://github.com/getkirby/kirby/blob/c1ff870e4f13b647f57f99b3e963340c3e0a286c/src/Cms/App.php#L1292 | 1.0 | No error thrown when accessing private and protected members of Kirby\Cms\App in a plugin - **Describe the bug**
In a plugin, you can access `kirby()->roots` and no error will be thrown. We had a discussion about that with @distantnative [here](https://forum.getkirby.com/t/workaround-for-a-plugin-setting-before-the-plugin-is-initialized/16846/7?u=hdodov).
**To Reproduce**
Steps to reproduce the behavior:
1. Clone the plainkit
2. Create a plugin
3. Put `var_dump(kirby()->roots)` in the plugin _index.php_ file
4. No error thrown
**Expected behavior**
An error should be thrown.
**Kirby Version**
3.3.2
**Additional context**
If you do `var_dump(kirby()->roots)` in a _template_, you correctly get the error:
>Cannot access protected property Kirby\Cms\App::$roots
However, doing the same thing in a plugin works, and it shouldn't. @distantnative [mentioned](https://forum.getkirby.com/t/workaround-for-a-plugin-setting-before-the-plugin-is-initialized/16846/8) this might be due to the way hooks are triggered:
https://github.com/getkirby/kirby/blob/c1ff870e4f13b647f57f99b3e963340c3e0a286c/src/Cms/App.php#L1292 | priority | no error thrown when accessing private and protected members of kirby cms app in a plugin describe the bug in a plugin you can access kirby roots and no error will be thrown we had a discussion about that with distantnative to reproduce steps to reproduce the behavior clone the plainkit create a plugin put var dump kirby roots in the plugin index php file no error thrown expected behavior an error should be thrown kirby version additional context if you do var dump kirby roots in a template you correctly get the error cannot access protected property kirby cms app roots however doing the same thing in a plugin works and it shouldn t distantnative this might be due to the way hooks are triggered | 1 |
275,891 | 8,581,725,740 | IssuesEvent | 2018-11-13 15:22:16 | sunpy/sunpy | https://api.github.com/repos/sunpy/sunpy | closed | Rhessi Fido queries can not span multiple months | Effort Medium Feature Request Hacktoberfest Package Novice Priority Medium net | The code in `instr.rhessi` only returns the database files for the month of the start time. It needs to return a list for every month in the time range. | 1.0 | Rhessi Fido queries can not span multiple months - The code in `instr.rhessi` only returns the database files for the month of the start time. It needs to return a list for every month in the time range. | priority | rhessi fido queries can not span multiple months the code in instr rhessi only returns the database files for the month of the start time it needs to return a list for every month in the time range | 1 |
100,059 | 4,075,954,637 | IssuesEvent | 2016-05-29 15:30:27 | moria0525/MadeinJLM-students | https://api.github.com/repos/moria0525/MadeinJLM-students | closed | Delete or Add 'Career portfolio' to Student | 1 - Ready Points: 3 Priority: Medium sh00ki Think-Smart | ## Feature: Delete or Add 'Career portfolio' to Student
## User Story:
- As a Student
- I want to add/delete my 'Career portfolio' in my profile.
- So that to improve or fix my profile
## Bug Handling:
### Expected behavior
### Actual behavior
### Steps to reproduce the behavior
<!---
@huboard:{"order":24.5,"milestone_order":27,"custom_state":""}
-->
| 1.0 | Delete or Add 'Career portfolio' to Student - ## Feature: Delete or Add 'Career portfolio' to Student
## User Story:
- As a Student
- I want to add/delete my 'Career portfolio' in my profile.
- So that to improve or fix my profile
## Bug Handling:
### Expected behavior
### Actual behavior
### Steps to reproduce the behavior
<!---
@huboard:{"order":24.5,"milestone_order":27,"custom_state":""}
-->
| priority | delete or add career portfolio to student feature delete or add career portfolio to student user story as a student i want to add delete my career portfolio in my profile so that to improve or fix my profile bug handling expected behavior actual behavior steps to reproduce the behavior huboard order milestone order custom state | 1 |
509,833 | 14,750,025,956 | IssuesEvent | 2021-01-08 01:00:37 | nlpsandbox/nlpsandbox | https://api.github.com/repos/nlpsandbox/nlpsandbox | opened | Identify the submission quota | Priority: Medium | In the COVID DREAM Challenge, participants are allowed to submit daily. @yy6linda it would be interesting to interview developers (in particular the ones who are submitting daily) what is their development strategy.
For this second continuous benchmarking experiment, I would prefer to allow a smaller number of submissions. Typically 1 or 2 submissions per week in order to limit overfitting.
I large quota is also at the "disadvantage" of new developers if the dataset is not updated regularly, as a submission that has been evaluated 20 times on the same data may overfit and appear as having a better score than a new submission that may perform less well on the dataset but have better performance reproducibility. It will be a game changer once we can report performance on performance reproducibility once we have more than once dataset.
Current submission quota: 2 submissions / week | 1.0 | Identify the submission quota - In the COVID DREAM Challenge, participants are allowed to submit daily. @yy6linda it would be interesting to interview developers (in particular the ones who are submitting daily) what is their development strategy.
For this second continuous benchmarking experiment, I would prefer to allow a smaller number of submissions. Typically 1 or 2 submissions per week in order to limit overfitting.
I large quota is also at the "disadvantage" of new developers if the dataset is not updated regularly, as a submission that has been evaluated 20 times on the same data may overfit and appear as having a better score than a new submission that may perform less well on the dataset but have better performance reproducibility. It will be a game changer once we can report performance on performance reproducibility once we have more than once dataset.
Current submission quota: 2 submissions / week | priority | identify the submission quota in the covid dream challenge participants are allowed to submit daily it would be interesting to interview developers in particular the ones who are submitting daily what is their development strategy for this second continuous benchmarking experiment i would prefer to allow a smaller number of submissions typically or submissions per week in order to limit overfitting i large quota is also at the disadvantage of new developers if the dataset is not updated regularly as a submission that has been evaluated times on the same data may overfit and appear as having a better score than a new submission that may perform less well on the dataset but have better performance reproducibility it will be a game changer once we can report performance on performance reproducibility once we have more than once dataset current submission quota submissions week | 1 |
435,282 | 12,533,766,577 | IssuesEvent | 2020-06-04 18:11:39 | ngageoint/hootenanny | https://api.github.com/repos/ngageoint/hootenanny | opened | Missing element custom tag not making it through changeset derivation | Category: Core Priority: Medium Status: Defined Type: Bug | Looks like the changes from #4012 weren't quite right. the `hoot:missing_child` tag needs to be excluded from being dropped by `ChangesetDeriver` during cut and replace. We need to add a pass through filter list to allow certain debug tags through regardless. | 1.0 | Missing element custom tag not making it through changeset derivation - Looks like the changes from #4012 weren't quite right. the `hoot:missing_child` tag needs to be excluded from being dropped by `ChangesetDeriver` during cut and replace. We need to add a pass through filter list to allow certain debug tags through regardless. | priority | missing element custom tag not making it through changeset derivation looks like the changes from weren t quite right the hoot missing child tag needs to be excluded from being dropped by changesetderiver during cut and replace we need to add a pass through filter list to allow certain debug tags through regardless | 1 |
830,159 | 31,992,061,712 | IssuesEvent | 2023-09-21 06:44:22 | TriggerReactor/TriggerReactor | https://api.github.com/repos/TriggerReactor/TriggerReactor | closed | Proposals: Numeric Separators | domain:suggestion package:core category:feature-request priority:medium | This feature enables developers to make their numeric literals more readable by creating a visual separation between groups of digits. Large numeric literals are difficult for the human eye to parse quickly, especially when there are long digit repetitions. This impairs both the ability to get the correct value / order of magnitude.
```java
1000000000 // Is this a billion? a hundred millions? Ten millions?
20230711.1349 // What scale is this? what power of 10?
```
Using underscores (`_`, U+005F) as separators helps improve readability for numeric literals, both integers and floating-point (in TriggerReactor, it's all doubles anyway):
```java
1_000_000_000 // Ah, so a billion
20_230_711.1349 // And this is hundreds of millions
```
Also, this works on the fractional parts, too:
```java
0.000_001 // 1 millionth
```
## References
* [tc39/proposal-numeric-separator](https://github.com/tc39/proposal-numeric-separator)
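The proposal above leaves the implementation open. One minimal way an interpreter that stores every number as a double (as the issue notes TriggerReactor does) could accept separators is to check that each underscore sits strictly between two digits and then strip the underscores before parsing. This is a hedged sketch only; `parseNumericLiteral` is a hypothetical helper, not TriggerReactor's actual lexer.
```java
// Hypothetical sketch: validate that '_' appears only between digits,
// then strip the separators and parse the literal as a double.
public class NumericSeparatorSketch {

    static double parseNumericLiteral(String literal) {
        for (int i = 0; i < literal.length(); i++) {
            if (literal.charAt(i) == '_') {
                boolean digitBefore = i > 0 && Character.isDigit(literal.charAt(i - 1));
                boolean digitAfter = i + 1 < literal.length() && Character.isDigit(literal.charAt(i + 1));
                if (!digitBefore || !digitAfter) {
                    throw new NumberFormatException("misplaced '_' in numeric literal: " + literal);
                }
            }
        }
        return Double.parseDouble(literal.replace("_", ""));
    }

    public static void main(String[] args) {
        System.out.println(parseNumericLiteral("1_000_000_000")); // 1.0E9
        System.out.println(parseNumericLiteral("0.000_001"));     // 1.0E-6
    }
}
```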
| 1.0 | Proposals: Numeric Separators - This feature enables developers to make their numeric literals more readable by creating a visual separation between groups of digits. Large numeric literals are difficult for the human eye to parse quickly, especially when there are long digit repetitions. This impairs both the ability to get the correct value / order of magnitude.
```java
1000000000 // Is this a billion? a hundred millions? Ten millions?
20230711.1349 // What scale is this? what power of 10?
```
Using underscores (`_`, U+005F) as separators helps improve readability for numeric literals, both integers and floating-point (in TriggerReactor, it's all doubles anyway):
```java
1_000_000_000 // Ah, so a billion
20_230_711.1349 // And this is hundreds of millions
```
Also, this works on the fractional parts, too:
```java
0.000_001 // 1 millionth
```
## References
* [tc39/proposal-numeric-separator](https://github.com/tc39/proposal-numeric-separator)
| priority | proposals numeric separators this feature enables developers to make their numeric literals more readable by creating a visual separation between groups of digits large numeric literals are difficult for the human eye to parse quickly especially when there are long digit repetitions this impairs both the ability to get the correct value order of magnitude java is this a billion a hundred millions ten millions what scale is this what power of using underscores u as separators helps improve readability for numeric literals both integers and floating point in triggerreactor it s all doubles anyway java ah so a billion and this is hundreds of millions also this works on the fractional parts too java millionth references | 1 |
323,551 | 9,856,385,592 | IssuesEvent | 2019-06-19 21:59:08 | Fabian-Sommer/HeroesLounge | https://api.github.com/repos/Fabian-Sommer/HeroesLounge | closed | Enhance Calendar | enhancement medium priority small | In the UpcomingMatches component, when a match has a division, the division title is displayed. This should be the playoff title if the match belongs to a playoff or the division belongs to a playoff | 1.0 | Enhance Calendar - In the UpcomingMatches component, when a match has a division, the division title is displayed. This should be the playoff title if the match belongs to a playoff or the division belongs to a playoff | priority | enhance calendar in the upcomingmatches component when a match has a division the division title is displayed this should be the playoff title if the match belongs to a playoff or the division belongs to a playoff | 1 |
368,784 | 10,884,452,128 | IssuesEvent | 2019-11-18 08:19:00 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.0 staging-1206] Add amount of ingredients in tooltip | Medium Priority | **Version:** 0.9.0.0 beta staging-debug
But...
HOW MUCH INGREDIENTS???
![изображение](https://user-images.githubusercontent.com/4980243/67798676-0441c100-fa95-11e9-91ad-98a093f66c3c.png)
| 1.0 | [0.9.0 staging-1206] Add amount of ingredients in tooltip - **Version:** 0.9.0.0 beta staging-debug
But...
HOW MUCH INGREDIENTS???
![изображение](https://user-images.githubusercontent.com/4980243/67798676-0441c100-fa95-11e9-91ad-98a093f66c3c.png)
| priority | add amount of ingredients in tooltip version beta staging debug but how much ingredients | 1 |
644,786 | 20,987,711,724 | IssuesEvent | 2022-03-29 06:07:25 | AY2122S2-CS2103-W17-4/tp | https://api.github.com/repos/AY2122S2-CS2103-W17-4/tp | closed | Add support for Applicants to have a hiredStatus | priority.Medium | User story: As a recruiter, I want to be able to see the current position of an applicant, so that I can easily tell if they have already secured a position or not. | 1.0 | Add support for Applicants to have a hiredStatus - User story: As a recruiter, I want to be able to see the current position of an applicant, so that I can easily tell if they have already secured a position or not. | priority | add support for applicants to have a hiredstatus user story as a recruiter i want to be able to see the current position of an applicant so that i can easily tell if they have already secured a position or not | 1 |
645,485 | 21,006,170,907 | IssuesEvent | 2022-03-29 22:56:10 | dtcenter/MET | https://api.github.com/repos/dtcenter/MET | opened | Enhance TC-Gen to quantify GTWO shapefile misses in space and time. | type: enhancement priority: medium requestor: NOAA/other alert: NEED MORE DEFINITION alert: NEED ACCOUNT KEY alert: NEED PROJECT ASSIGNMENT MET: Tropical Cyclone Tools | ## Describe the Enhancement ##
This feature request came via @halperin-erau based on feedback on his TC-Gen presentation at the Interdepartmental Hurricane Conference in March 2022. The requested changes apply to the verification of GTWO shapefile areas specified using the "-shape" command line option.
Each shapefile has a corresponding probability of genesis value. TC-Gen checks the BEST track data to determine whether genesis occurred within the prescribed area and during the prescribed time window. The result is binary, yes/no.
Richard Pasch at NHC asked if there was a way to verify/determine how far off some genesis forecasts might be in the event that genesis occurs just outside of the GTWO shape file. In other words, could we output something like a minimum distance from the shape file boundary to the nearest BEST genesis location within the 2/5/7-day window? Kate Musgrave at CIRA asked a similar question, but with respect to timing errors. Could we output the time of BEST genesis for events that occur within the GTWO shape file, but outside of the 2/5/7-day window?
Here are my initial thoughts for how this could be handled:
I can see how this functionality they requested could be added. I'd start by updating the logic in the [find_genshape_match()](https://github.com/dtcenter/MET/blob/a32b11b13c994814e9e8e0d2ab116ddc702e6c64/met/src/tools/tc_utils/tc_gen/tc_gen.cc#L1136) function.
To keep track of near misses, we need to...
Replace this call to [p.is_inside(x, y)](https://github.com/dtcenter/MET/blob/a32b11b13c994814e9e8e0d2ab116ddc702e6c64/met/src/tools/tc_utils/tc_gen/tc_gen.cc#L1159) to a function that would compute the minimum distance to the boundary of the polyline. That logic does already exist elsewhere in MET to find the minimum distance from a point to a series of line segments.
Define logic as to whether it's better to be closer in space or time... in what order should those be minimized?
Update the [write_pct_genmpr_row()](https://github.com/dtcenter/MET/blob/a32b11b13c994814e9e8e0d2ab116ddc702e6c64/met/src/tools/tc_utils/tc_gen/tc_gen.cc#L2236) function and output line type as needed to indicate the closest match.
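MET is C++ and, as noted above, already has point-to-segment distance logic elsewhere in the code base. Purely as a hedged illustration of the geometry involved (not MET code), the sketch below computes the minimum distance from a point to a polyline boundary by taking the smallest point-to-segment distance. It works in flat x/y coordinates; a real implementation would also need to handle lat/lon and decide how spatial and timing near-misses are ranked.
```java
// Illustrative geometry only: minimum distance from a point to a polyline,
// taken as the smallest distance to any of its segments.
public class MinDistanceSketch {

    static double pointToSegment(double px, double py,
                                 double ax, double ay, double bx, double by) {
        double dx = bx - ax, dy = by - ay;
        double lenSq = dx * dx + dy * dy;
        // Degenerate segment: distance to the single point.
        if (lenSq == 0.0) return Math.hypot(px - ax, py - ay);
        // Project the point onto the segment and clamp to [0, 1].
        double t = ((px - ax) * dx + (py - ay) * dy) / lenSq;
        t = Math.max(0.0, Math.min(1.0, t));
        return Math.hypot(px - (ax + t * dx), py - (ay + t * dy));
    }

    static double pointToPolyline(double px, double py, double[][] vertices) {
        double best = Double.POSITIVE_INFINITY;
        for (int i = 0; i + 1 < vertices.length; i++) {
            best = Math.min(best, pointToSegment(px, py,
                    vertices[i][0], vertices[i][1],
                    vertices[i + 1][0], vertices[i + 1][1]));
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] boundary = { {0, 0}, {10, 0}, {10, 10}, {0, 10}, {0, 0} };
        System.out.println(pointToPolyline(12, 5, boundary)); // 2.0
    }
}
```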
### Time Estimate ###
3 days.
### Sub-Issues ###
Consider breaking the enhancement down into sub-issues.
No sub-issues needed.
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
*Define the source of funding and account keys here or state NONE.*
## Define the Metadata ##
### Assignee ###
- [ ] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required
### Labels ###
- [x] Select **component(s)**
- [x] Select **priority**
- [x] Select **requestor(s)**
### Projects and Milestone ###
- [x] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label
- [x] Select **Milestone** as the next official version or **Backlog of Development Ideas**
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
Since the TC-Gen MPR line type is NOT loaded into METdatadb, I expect no impacts on the other METplus components.
## Enhancement Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
| 1.0 | Enhance TC-Gen to quantify GTWO shapefile misses in space and time. - ## Describe the Enhancement ##
This feature request came via @halperin-erau based on feedback on his TC-Gen presentation at the Interdepartmental Hurricane Conference in March 2022. The requested changes apply to the verification of GTWO shapefile areas specified using the "-shape" command line option.
Each shapefile has a corresponding probability of genesis value. TC-Gen checks the BEST track data to determine whether genesis occurred within the prescribed area and during the prescribed time window. The result is binary, yes/no.
Richard Pasch at NHC asked if there was a way to verify/determine how far off some genesis forecasts might be in the event that genesis occurs just outside of the GTWO shape file. In other words, could we output something like a minimum distance from the shape file boundary to the nearest BEST genesis location within the 2/5/7-day window? Kate Musgrave at CIRA asked a similar question, but with respect to timing errors. Could we output the time of BEST genesis for events that occur within the GTWO shape file, but outside of the 2/5/7-day window?
Here are my initial thoughts for how this could be handled:
I can see how this functionality they requested could be added. I'd start by updating the logic in the [find_genshape_match()](https://github.com/dtcenter/MET/blob/a32b11b13c994814e9e8e0d2ab116ddc702e6c64/met/src/tools/tc_utils/tc_gen/tc_gen.cc#L1136) function.
To keep track of near misses, we need to...
Replace this call to [p.is_inside(x, y)](https://github.com/dtcenter/MET/blob/a32b11b13c994814e9e8e0d2ab116ddc702e6c64/met/src/tools/tc_utils/tc_gen/tc_gen.cc#L1159) to a function that would compute the minimum distance to the boundary of the polyline. That logic does already exist elsewhere in MET to find the minimum distance from a point to a series of line segments.
Define logic as to whether it's better to be closer in space or time... in what order should those be minimized?
Update the [write_pct_genmpr_row()](https://github.com/dtcenter/MET/blob/a32b11b13c994814e9e8e0d2ab116ddc702e6c64/met/src/tools/tc_utils/tc_gen/tc_gen.cc#L2236) function and output line type as needed to indicate the closest match.
### Time Estimate ###
3 days.
### Sub-Issues ###
Consider breaking the enhancement down into sub-issues.
No sub-issues needed.
### Relevant Deadlines ###
*List relevant project deadlines here or state NONE.*
### Funding Source ###
*Define the source of funding and account keys here or state NONE.*
## Define the Metadata ##
### Assignee ###
- [ ] Select **engineer(s)** or **no engineer** required
- [ ] Select **scientist(s)** or **no scientist** required
### Labels ###
- [x] Select **component(s)**
- [x] Select **priority**
- [x] Select **requestor(s)**
### Projects and Milestone ###
- [x] Select **Repository** and/or **Organization** level **Project(s)** or add **alert: NEED PROJECT ASSIGNMENT** label
- [x] Select **Milestone** as the next official version or **Backlog of Development Ideas**
## Define Related Issue(s) ##
Consider the impact to the other METplus components.
- [x] [METplus](https://github.com/dtcenter/METplus/issues/new/choose), [MET](https://github.com/dtcenter/MET/issues/new/choose), [METdatadb](https://github.com/dtcenter/METdatadb/issues/new/choose), [METviewer](https://github.com/dtcenter/METviewer/issues/new/choose), [METexpress](https://github.com/dtcenter/METexpress/issues/new/choose), [METcalcpy](https://github.com/dtcenter/METcalcpy/issues/new/choose), [METplotpy](https://github.com/dtcenter/METplotpy/issues/new/choose)
Since the TC-Gen MPR line type is NOT loaded into METdatadb, I expect no impacts on the other METplus components.
## Enhancement Checklist ##
See the [METplus Workflow](https://metplus.readthedocs.io/en/latest/Contributors_Guide/github_workflow.html) for details.
- [ ] Complete the issue definition above, including the **Time Estimate** and **Funding Source**.
- [ ] Fork this repository or create a branch of **develop**.
Branch name: `feature_<Issue Number>_<Description>`
- [ ] Complete the development and test your changes.
- [ ] Add/update log messages for easier debugging.
- [ ] Add/update unit tests.
- [ ] Add/update documentation.
- [ ] Push local changes to GitHub.
- [ ] Submit a pull request to merge into **develop**.
Pull request: `feature <Issue Number> <Description>`
- [ ] Define the pull request metadata, as permissions allow.
Select: **Reviewer(s)** and **Linked issues**
Select: **Repository** level development cycle **Project** for the next official release
Select: **Milestone** as the next official version
- [ ] Iterate until the reviewer(s) accept and merge your changes.
- [ ] Delete your fork or branch.
- [ ] Close this issue.
| priority | enhance tc gen to quantify gtwo shapefile misses in space and time describe the enhancement this feature request came via halperin erau based on feedback on his tc gen presentation at the interdepartmental hurricane conference in march the requested changes apply to the verification of gtwo shapefile areas specified using the shape command line option each shapefile has a corresponding probability of genesis value tc gen checks the best track data to determine whether genesis occurred within the prescribed area and during the prescribed time window the result is binary yes no richard pasch at nhc asked if there was a way to verify determine how far off some genesis forecasts might be in the event that genesis occurs just outside of the gtwo shape file in other words could we output something like a minimum distance from the shape file boundary to the nearest best genesis location within the day window kate musgrave at cira asked a similar question but with respect to timing errors could we output the time of best genesis for events that occur within the gtwo shape file but outside of the day window here are my initial thoughts for how this could be handled i can see how this functionality they requested could be added i d start by updating the logic in the function to keep track of near misses we need to replace this call to to a function that would compute the minimum distance to the boundary of the polyline that logic does already exist elsewhere in met to find the minimum distance from a point to a series of line segments define logic as to whether it s better to be closer in space or time in what order should those be minimized update the function and output line type as needed to indicate the closest match time estimate days sub issues consider breaking the enhancement down into sub issues no sub issues needed relevant deadlines list relevant project deadlines here or state none funding source define the source of funding and account keys here or state none define the metadata assignee select engineer s or no engineer required select scientist s or no scientist required labels select component s select priority select requestor s projects and milestone select repository and or organization level project s or add alert need project assignment label select milestone as the next official version or backlog of development ideas define related issue s consider the impact to the other metplus components since the tc gen mpr line type is not loaded into metdatadb i expect no impacts on the other metplus components enhancement checklist see the for details complete the issue definition above including the time estimate and funding source fork this repository or create a branch of develop branch name feature complete the development and test your changes add update log messages for easier debugging add update unit tests add update documentation push local changes to github submit a pull request to merge into develop pull request feature define the pull request metadata as permissions allow select reviewer s and linked issues select repository level development cycle project for the next official release select milestone as the next official version iterate until the reviewer s accept and merge your changes delete your fork or branch close this issue | 1 |
81,374 | 3,590,179,961 | IssuesEvent | 2016-02-01 03:14:05 | ESAPI/esapi-java-legacy | https://api.github.com/repos/ESAPI/esapi-java-legacy | closed | Could not set multiple cookies one by one at single request | bug imported Priority-Medium | _From [[email protected]](https://code.google.com/u/110316386907476959040/) on November 17, 2011 07:12:45_
What steps will reproduce the problem? 1. I was trying to set multiple cookies one by one in a single request, like username, token, jsession, etc.
2. Etc. response.addCookie(cookie1); response.addCookie(cookie2);
3. Then the system is hitting SecurityWrapperResponse.addHeader(), where it is calling getHttpServletResponse().setHeader(), which replaces the value of the "set-cookie" header. We need to use getHttpServletResponse().addHeader().
public void addHeader(String name, String value) {
try {
// TODO: make stripping a global config
String strippedName = StringUtilities.stripControls(name);
String strippedValue = StringUtilities.stripControls(value);
String safeName = ESAPI.validator().getValidInput("addHeader", strippedName, "HTTPHeaderName", 20, false);
String safeValue = ESAPI.validator().getValidInput("addHeader", strippedValue, "HTTPHeaderValue", ESAPI.securityConfiguration().getMaxHttpHeaderSize(), false);
getHttpServletResponse().setHeader(safeName, safeValue);
} catch (ValidationException e) {
logger.warning(Logger.SECURITY_FAILURE, "Attempt to add invalid header denied", e);
}
} What is the expected output? What do you see instead? Expected result should be: in browser and request.getCookies() should return all the cookie value username, token etc. But instead of that, it was returning only last cookie 'jsession', not the 1st, 2nd cookies 'username', 'token'. What version of the product are you using? On what operating system? version - esapi-2.0.1.jar and esapi-2.0.rc11.jar.
OS - windows Does this issue affect only a specified browser or set of browsers? I have tried this in IE, firefox, chrome, opera, safari. This will effect in all browser. Please provide any additional information below. I am Software Engineer in Acclaris, http://www.acclaris.com/ . In our system, we are integrating ESAPI (Added filter SecurityWrapper.java) for XSS filtering attack.
I have downloaded current code base from http://owasp-esapi-java.googlecode.com/svn/trunk/ . After that I have modified code and building a fresh esapi-2.0.2-SNAPSHOT.jar using instruction from https://www.owasp.org/index.php/ESAPI-Building . Attaching patch and updated jar. Now it is working fine in our system. Please review and update SVN code base as well as https://code.google.com/p/owasp-esapi-java/downloads/list . So that, we can add updated versioned of jar in our system. Please send your feedback as early as possible.
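The attached patch itself is not reproduced in this report. Assuming the fix is exactly what the reporter describes (calling addHeader() instead of setHeader() on the wrapped response, so each cookie gets its own Set-Cookie header), a sketch of the patched method would be the fragment below: a piece of SecurityWrapperResponse identical to the code quoted above except for the single call that is swapped.
```java
// Sketch of the described patch inside SecurityWrapperResponse (fragment, not a standalone class):
// same as the method quoted above, but the final call uses addHeader() so that
// multiple Set-Cookie headers are preserved instead of overwritten.
public void addHeader(String name, String value) {
    try {
        // TODO: make stripping a global config
        String strippedName = StringUtilities.stripControls(name);
        String strippedValue = StringUtilities.stripControls(value);
        String safeName = ESAPI.validator().getValidInput("addHeader", strippedName, "HTTPHeaderName", 20, false);
        String safeValue = ESAPI.validator().getValidInput("addHeader", strippedValue, "HTTPHeaderValue", ESAPI.securityConfiguration().getMaxHttpHeaderSize(), false);
        getHttpServletResponse().addHeader(safeName, safeValue); // was setHeader(safeName, safeValue)
    } catch (ValidationException e) {
        logger.warning(Logger.SECURITY_FAILURE, "Attempt to add invalid header denied", e);
    }
}
```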
Thanks ,
Anita
Software Engineer, Acclaris
_Original issue: http://code.google.com/p/owasp-esapi-java/issues/detail?id=254_ | 1.0 | Could not set multiple cookies one by one at single request - _From [[email protected]](https://code.google.com/u/110316386907476959040/) on November 17, 2011 07:12:45_
What steps will reproduce the problem? 1. I was trying to set multiple cookies one by one in a single request, like username, token, jsession, etc.
2. Etc. response.addCookie(cookie1); response.addCookie(cookie2);
3. Then the system is hitting SecurityWrapperResponse.addHeader(), where it is calling getHttpServletResponse().setHeader(), which replaces the value of the "set-cookie" header. We need to use getHttpServletResponse().addHeader().
public void addHeader(String name, String value) {
try {
// TODO: make stripping a global config
String strippedName = StringUtilities.stripControls(name);
String strippedValue = StringUtilities.stripControls(value);
String safeName = ESAPI.validator().getValidInput("addHeader", strippedName, "HTTPHeaderName", 20, false);
String safeValue = ESAPI.validator().getValidInput("addHeader", strippedValue, "HTTPHeaderValue", ESAPI.securityConfiguration().getMaxHttpHeaderSize(), false);
getHttpServletResponse().setHeader(safeName, safeValue);
} catch (ValidationException e) {
logger.warning(Logger.SECURITY_FAILURE, "Attempt to add invalid header denied", e);
}
} What is the expected output? What do you see instead? Expected result should be: in browser and request.getCookies() should return all the cookie value username, token etc. But instead of that, it was returning only last cookie 'jsession', not the 1st, 2nd cookies 'username', 'token'. What version of the product are you using? On what operating system? version - esapi-2.0.1.jar and esapi-2.0.rc11.jar.
OS - windows Does this issue affect only a specified browser or set of browsers? I have tried this in IE, firefox, chrome, opera, safari. This will effect in all browser. Please provide any additional information below. I am Software Engineer in Acclaris, http://www.acclaris.com/ . In our system, we are integrating ESAPI (Added filter SecurityWrapper.java) for XSS filtering attack.
I have downloaded current code base from http://owasp-esapi-java.googlecode.com/svn/trunk/ . After that I have modified code and building a fresh esapi-2.0.2-SNAPSHOT.jar using instruction from https://www.owasp.org/index.php/ESAPI-Building . Attaching patch and updated jar. Now it is working fine in our system. Please review and update SVN code base as well as https://code.google.com/p/owasp-esapi-java/downloads/list . So that, we can add updated versioned of jar in our system. Please send your feedback as early as possible.
Thanks ,
Anita
Software Engineer, Acclaris
_Original issue: http://code.google.com/p/owasp-esapi-java/issues/detail?id=254_ | priority | could not set multiple cookies one by one at single request from on november what steps will reproduce the problem i was trying to set multiple cookies one by one at single request like username token jsession etc etc response addcookie response addcookie then system is htting securitywrapperresponse addheader where it is calling gethttpservletresponse setheader which is replacing value of header name set cookie we need to use gethttpservletresponse addheader public void addheader string name string value try todo make stripping a global config string strippedname stringutilities stripcontrols name string strippedvalue stringutilities stripcontrols value string safename esapi validator getvalidinput addheader strippedname httpheadername false string safevalue esapi validator getvalidinput addheader strippedvalue httpheadervalue esapi securityconfiguration getmaxhttpheadersize false gethttpservletresponse setheader safename safevalue catch validationexception e logger warning logger security failure attempt to add invalid header denied e what is the expected output what do you see instead expected result should be in browser and request getcookies should return all the cookie value username token etc but instead of that it was returning only last cookie jsession not the cookies username token what version of the product are you using on what operating system version esapi jar and esapi jar os windows does this issue affect only a specified browser or set of browsers i have tried this in ie firefox chrome opera safari this will effect in all browser please provide any additional information below i am software engineer in acclaris in our system we are integrating esapi added filter securitywrapper java for xss filtering attack i have downloaded current code base from after that i have modified code and building a fresh esapi snapshot jar using instruction from attaching patch and updated jar now it is working fine in our system please review and update svn code base as well as so that we can add updated versioned of jar in our system please send your feedback as early as possible thanks anita software engineer acclaris original issue | 1 |
392,685 | 11,594,509,514 | IssuesEvent | 2020-02-24 15:27:05 | ShabadOS/desktop | https://api.github.com/repos/ShabadOS/desktop | opened | Add source modifications / Ability to turn on or off pauri numbers in translations | Priority: 2 Medium Status: In Research Type: Feature/Enhancement | How to deal with this for overlay vs local displays, since overlay doesn't have any settings for source config etc? | 1.0 | Add source modifications / Ability to turn on or off pauri numbers in translations - How to deal with this for overlay vs local displays, since overlay doesn't have any settings for source config etc? | priority | add source modifications ability to turn on or off pauri numbers in translations how to deal with this for overlay vs local displays since overlay doesn t have any settings for source config etc | 1 |
393,298 | 11,613,132,030 | IssuesEvent | 2020-02-26 10:11:51 | EiT-Computer-Vision/velocity-estimation | https://api.github.com/repos/EiT-Computer-Vision/velocity-estimation | closed | Create a database to store and retrieve the raw video data from | feature medium priority | The video files are quite large, so setting up a database to store them would be convenient. Can be done on NTNU's MySQL database: https://innsida.ntnu.no/wiki/-/wiki/English/Using+MySQL+at+NTNU | 1.0 | Create a database to store and retrieve the raw video data from - The video files are quite large, so setting up a database to store them would be convenient. Can be done on NTNU's MySQL database: https://innsida.ntnu.no/wiki/-/wiki/English/Using+MySQL+at+NTNU | priority | create a database to store and retrieve the raw video data from the video files are quite large so setting up a database to store them would be convenient can be done on ntnu s mysql database | 1 |
342,525 | 10,318,658,983 | IssuesEvent | 2019-08-30 15:27:08 | ralna/RALFit | https://api.github.com/repos/ralna/RALFit | closed | Add fallback to `calculate_step` | Medium Priority | Consider adding a fallback call for `type_of_method = 2` and `nlls_method = 3|4`
if `call regularization_solver` fails, then don't terminate with error but try `call solve_galahad`.
https://github.com/ralna/RALFit/blob/b9f438c25874379a71922f7c18f352722d7cf55c/libRALFit/src/ral_nlls_internal.f90#L1192-L1207
The issue is raised when for some reason Delta is such that `Ashift = A + 1/Delta` does not guarantee `Ashift` is SPD and the call to `SPD_solve` fails. | 1.0 | Add fallback to `calculate_step` - Consider adding a fallback call for `type_of_method = 2` and `nlls_method = 3|4`
if `call regularization_solver` fails, then don't terminate with error but try `call solve_galahad`.
https://github.com/ralna/RALFit/blob/b9f438c25874379a71922f7c18f352722d7cf55c/libRALFit/src/ral_nlls_internal.f90#L1192-L1207
The issue is raised when for some reason Delta is such that `Ashift = A + 1/Delta` does not guarantee `Ashift` is SPD and the call to `SPD_solve` fails. | priority | add fallback to calculate step consider adding a fallback call for type of method and nlls method if call regularization solver fails then don t terminate with error but try call solve galahad the issue is raised when for some reason delta is such that ashift a delta does not guarantee ashift is spd and the call to spd solve fails | 1 |
684,221 | 23,411,657,235 | IssuesEvent | 2022-08-12 18:15:05 | phylum-dev/phylum-ci | https://api.github.com/repos/phylum-dev/phylum-ci | closed | `Issue Summary` data missing for vulnerability domain | bug medium priority | ## Describe the bug
`Issue Summary` data seems to be missing for vulnerabilities.
## To Reproduce
Steps to reproduce the behavior:
1. Submit failing analysis that contains a vuln
2. Observe the MR/PR comment with missing `Issue Summary` data
## Expected behavior
`Issue Summary` data is populated for all issues.
## Screenshots
![image](https://user-images.githubusercontent.com/34108612/184266168-14d69509-7257-4ad7-a75a-1120770dd750.png)
## Additional context
I checked a license issue and the summary data was present, but I did not check other risk domains. There is a chance this only impacts vulnerability issues.
| 1.0 | `Issue Summary` data missing for vulnerability domain - ## Describe the bug
`Issue Summary` data seems to be missing for vulnerabilities.
## To Reproduce
Steps to reproduce the behavior:
1. Submit failing analysis that contains a vuln
2. Observe the MR/PR comment with missing `Issue Summary` data
## Expected behavior
`Issue Summary` data is populated for all issues.
## Screenshots
![image](https://user-images.githubusercontent.com/34108612/184266168-14d69509-7257-4ad7-a75a-1120770dd750.png)
## Additional context
I checked a license issue and the summary data was present, but I did not check other risk domains. There is a chance this only impacts vulnerability issues.
| priority | issue summary data missing for vulnerability domain describe the bug issue summary data seems to be missing for vulnerabilities to reproduce steps to reproduce the behavior submit failing analysis that contains a vuln observe the mr pr comment with missing issue summary data expected behavior issue summary data is populated for all issues screenshots additional context i check a license issue and the summary data was present but i did not check other risk domains there is a chance this is only impact vulnerability issues | 1 |
538,094 | 15,762,392,519 | IssuesEvent | 2021-03-31 11:01:59 | kymckay/f21as-project | https://api.github.com/repos/kymckay/f21as-project | closed | Potential for concurrent modification error | priority/medium type/bug | Currently the `QueueGUI` gets the list of the `SharedQueue` and iterates through it to update the display. This could technically produce a concurrent modification error (like #99) if a producer were to add a new item or consumer were to remove one during the update.
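One common way to rule this out, shown in the hedged sketch below (which assumes rather than reproduces the project's `SharedQueue`/`QueueGUI` internals), is to have the shared queue hand out a defensive copy under its own lock, so the GUI iterates over a snapshot while producer and consumer threads keep mutating the live list.
```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch (not the project's actual classes): the shared queue returns a
// snapshot copy under its own lock, so GUI iteration can never race with
// producers adding or consumers removing items.
class SharedQueueSketch<T> {
    private final List<T> items = new ArrayList<>();

    public synchronized void add(T item) { items.add(item); }

    public synchronized T removeHead() { return items.isEmpty() ? null : items.remove(0); }

    // Defensive copy: iterating the returned list cannot throw
    // ConcurrentModificationException, whatever the worker threads do.
    public synchronized List<T> snapshot() { return new ArrayList<>(items); }
}

class QueueGuiSketch {
    static String render(SharedQueueSketch<String> queue) {
        StringBuilder sb = new StringBuilder();
        for (String item : queue.snapshot()) {   // iterate the snapshot, not the live list
            sb.append(item).append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        SharedQueueSketch<String> q = new SharedQueueSketch<>();
        q.add("order-1");
        q.add("order-2");
        System.out.print(render(q));
    }
}
```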
It probably doesn't happen often due to the delays in our threads and how they line up relative to the points at which the GUI observer is notified to update, but we should probably still fix it. | 1.0 | Potential for concurrent modification error - Currently the `QueueGUI` gets the list of the `SharedQueue` and iterates through it to update the display. This could technically produce a concurrent modification error (like #99) if a producer were to add a new item or consumer were to remove one during the update.
It probably doesn't happen often due to the delays in our threads and how they line up relative to the points at which the GUI observer is notified to update, but we should probably still fix it. | priority | potential for concurrent modification error currently the queuegui gets the list of the sharedqueue and iterates through it to update the display this could technically produce a concurrent modification error like if a producer were to add a new item or consumer were to remove one during the update it probably doesn t happen often due to the delays in our threads and how they line up relative to the points at which the gui observer is notified to update but we should probably still fix it | 1 |
419,479 | 12,224,015,842 | IssuesEvent | 2020-05-02 20:20:13 | hochreutenerl/camp-dictionary | https://api.github.com/repos/hochreutenerl/camp-dictionary | opened | Ideen Struktur | enhancement feedback medium priority medium task | Die Hauptkategorien sind relativ breit. Man könnte die Begriffe aber noch taggen und dort feingranularer arbeiten: z. B. Kartenkunde, Bäume, Getreide, Holzbearbeitung, Metallbearbeitung, Krankheitssymptome, Krankheiten, Mannschaftssportarten, … | 1.0 | Ideen Struktur - Die Hauptkategorien sind relativ breit. Man könnte die Begriffe aber noch taggen und dort feingranularer arbeiten: z. B. Kartenkunde, Bäume, Getreide, Holzbearbeitung, Metallbearbeitung, Krankheitssymptome, Krankheiten, Mannschaftssportarten, … | priority | ideen struktur die hauptkategorien sind relativ breit man könnte die begriffe aber noch taggen und dort feingranularer arbeiten z b kartenkunde bäume getreide holzbearbeitung metallbearbeitung krankheitssymptome krankheiten mannschaftssportarten … | 1 |
677,345 | 23,159,285,346 | IssuesEvent | 2022-07-29 15:54:02 | Cotalker/documentation | https://api.github.com/repos/Cotalker/documentation | closed | Bug report: Inconsistency between channel and task detail information | Bug report Bug medium priority | ### Affected system
Cotalker Web Application
### Affected system (other)
_No response_
### Affected environment
Production
### Affected environment (other)
_No response_
### App version
17.7.1
### Details
We have two tasks (Purchase Order and Asset Management) with a note on each one. Sometimes, if we click the task detail to access the note, we see the note of the other task.
![image](https://user-images.githubusercontent.com/34345309/151985362-799945e6-d7d6-40f9-bf4f-b3d6282d664f.png)
### Steps to reproduce
When clicking between two tasks on the same SM, each with a note, we sometimes see the note of the other task.
### Expected result
We should always see the correct note on the task detail information.
### Additional data
_No response_ | 1.0 | Bug report: Inconsistency between channel and task detail information - ### Affected system
Cotalker Web Application
### Affected system (other)
_No response_
### Affected environment
Production
### Affected environment (other)
_No response_
### App version
17.7.1
### Details
We have two tasks (Purchase Order and Asset Management) with a note on each one. Sometimes, if we click the task detail to access the note, we see the note of the other task.
![image](https://user-images.githubusercontent.com/34345309/151985362-799945e6-d7d6-40f9-bf4f-b3d6282d664f.png)
### Steps to reproduce
When clicking between two tasks on the same SM, each with a note, we sometimes see the note of the other task.
### Expected result
We should always see the correct note on the task detail information.
### Additional data
_No response_ | priority | bug report inconsistency between channel and task detail information affected system cotalker web application affected system other no response affected environment production affected environment other no response app version details we have two tasks purchase order and asset management with a note on each one sometimes if we click the task detail to access the note we see the note of the other task steps to reproduce clicking between two tasks on the same sm with a note on each we see this behavior of seeing the note of the other task expected result we should always see the correct note on the task detail information additional data no response | 1 |
397,174 | 11,724,704,586 | IssuesEvent | 2020-03-10 11:29:37 | stats4sd/stats4sd-site | https://api.github.com/repos/stats4sd/stats4sd-site | closed | Reviewer's status (authorised, not authorised etc) should appear in trip 'review' form | Priority: Medium Type:Enhancement | To let risk assessment team know the status of the decision-making. | 1.0 | Reviewer's status (authorised, not authorised etc) should appear in trip 'review' form - To let risk assessment team know the status of the decision-making. | priority | reviewer s status authorised not authorised etc should appear in trip review form to let risk assessment team know the status of the decision making | 1 |
680,887 | 23,288,499,781 | IssuesEvent | 2022-08-05 19:22:06 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] `pg_ls_tmpdir()` does not exist | kind/new-feature area/ysql priority/medium | Jira Link: [DB-3046](https://yugabyte.atlassian.net/browse/DB-3046)
### Description
```
yugabyte=# SELECT * from pg_ls_tmpdir();
ERROR: function pg_ls_tmpdir() does not exist
LINE 1: SELECT * from pg_ls_tmpdir();
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
```
Note: `pg_ls_tmpdir()` was introduced in Postgres 12. | 1.0 | [YSQL] `pg_ls_tmpdir()` does not exist - Jira Link: [DB-3046](https://yugabyte.atlassian.net/browse/DB-3046)
### Description
```
yugabyte=# SELECT * from pg_ls_tmpdir();
ERROR: function pg_ls_tmpdir() does not exist
LINE 1: SELECT * from pg_ls_tmpdir();
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
```
Note: `pg_ls_tmpdir()` was introduced in Postgres 12. | priority | pg ls tmpdir does not exist jira link description yugabyte select from pg ls tmpdir error function pg ls tmpdir does not exist line select from pg ls tmpdir hint no function matches the given name and argument types you might need to add explicit type casts note pg ls tmpdir was introduced in postgres | 1 |
187,897 | 6,762,203,114 | IssuesEvent | 2017-10-25 06:51:45 | opencurrents/opencurrents | https://api.github.com/repos/opencurrents/opencurrents | reopened | Chash Out: Amount of Cash needs to be displayed in the confirmation. | mvp priority low priority medium | I suggest this always equal the total amount in the balance. - Change button to "Send Dollars" rather than "Send Cash".
![image](https://user-images.githubusercontent.com/26234440/31856835-d2ad52c4-b691-11e7-916a-0659c1414276.png)
| 2.0 | Chash Out: Amount of Cash needs to be displayed in the confirmation. - I suggest this always equal the total amount in the balance. - Change button to "Send Dollars" rather than "Send Cash".
![image](https://user-images.githubusercontent.com/26234440/31856835-d2ad52c4-b691-11e7-916a-0659c1414276.png)
| priority | chash out amount of cash needs to be displayed in the confirmation i suggest this always equal the total amount in the balance change button to send dollars rather than send cash | 1 |
274,632 | 8,563,637,559 | IssuesEvent | 2018-11-09 14:37:45 | CS2113-AY1819S1-T16-4/main | https://api.github.com/repos/CS2113-AY1819S1-T16-4/main | closed | As a HR Staff, I want to have shorter commands | priority.medium type.story | so that i can reduce the amount of typing required
| 1.0 | As a HR Staff, I want to have shorter commands - so that i can reduce the amount of typing required
| priority | as a hr staff i want to have shorter commands so that i can reduce the amount of typing required | 1 |
531,954 | 15,527,812,682 | IssuesEvent | 2021-03-13 07:58:14 | containrrr/watchtower | https://api.github.com/repos/containrrr/watchtower | closed | v1.1.6 ignores port when sending to Gotify | Priority: Medium Status: Available Type: Bug | **Describe the bug**
The port is being ignored when sending notification to Gotify
**Environment**
```
Client: Docker Engine - Community
Version: 20.10.2
API version: 1.41
Go version: go1.13.15
Git commit: 2291f61
Built: Mon Dec 28 16:18:13 2020
OS/Arch: linux/arm
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.2
API version: 1.41 (minimum version 1.12)
Go version: go1.13.15
Git commit: 8891c58
Built: Mon Dec 28 16:15:48 2020
OS/Arch: linux/arm
Experimental: false
containerd:
Version: 1.4.3
GitCommit: 269548fa27e0089a8b8278fc4fc781d7f65a939b
runc:
Version: 1.0.0-rc92
GitCommit: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
docker-init:
Version: 0.19.0
GitCommit: de40ad0
```
```
`.::///+:/-. --///+//-:`` pi@raspberrypi
`+oooooooooooo: `+oooooooooooo: --------------
/oooo++//ooooo: ooooo+//+ooooo. OS: Raspbian GNU/Linux 10 (buster) armv7l
`+ooooooo:-:oo- +o+::/ooooooo: Host: Raspberry Pi 4 Model B Rev 1.4
`:oooooooo+`` `.oooooooo+- Kernel: 5.4.79-v7l+
`:++ooo/. :+ooo+/.` Uptime: 8 days, 29 mins
...` `.----.` ``.. Packages: 1872 (dpkg)
.::::-``:::::::::.`-:::-` Shell: bash 5.0.3
-:::-` .:::::::-` `-:::- Terminal: /dev/pts/1
`::. `.--.` `` `.---.``.::` CPU: BCM2711 (4) @ 1.500GHz
.::::::::` -::::::::` ` Memory: 855MiB / 7874MiB
.::` .:::::::::- `::::::::::``::.
-:::` ::::::::::. ::::::::::.`:::-
:::: -::::::::. `-:::::::: ::::
-::- .-:::-.``....``.-::-. -::-
.. `` .::::::::. `..`..
-:::-` -::::::::::` .:::::`
:::::::` -::::::::::` :::::::.
.::::::: -::::::::. ::::::::
`-:::::` ..--.` ::::::.
`...` `...--..` `...`
.::::::::::
`.-::::-`
```
**Logs**
```
Failed to send notification via shoutrrr (url=gotify://192.168.1.175:8188/removed):
failed to send notification to Gotify: Post "https://192.168.1.175/message?token=removed":
dial tcp 192.168.1.175:443: connect: connection refused,
```
**Additional context**
Working example using httpie CLI
```
http -f POST "http://192.168.1.175:8188/message?token=Removed" title="my title" message="my message" priority="5"
HTTP/1.1 200 OK
Content-Length: 114
Content-Type: application/json
Date: Wed, 13 Jan 2021 10:49:48 GMT
{
"appid": 1,
"date": "2021-01-13T10:49:48.068486043Z",
"id": 30,
"message": "my message",
"priority": 5,
"title": "my title"
}
```
| 1.0 | v1.1.6 ignores port when sending to Gotify - **Describe the bug**
The port is being ignored when sending notification to Gotify
**Environment**
```
Client: Docker Engine - Community
Version: 20.10.2
API version: 1.41
Go version: go1.13.15
Git commit: 2291f61
Built: Mon Dec 28 16:18:13 2020
OS/Arch: linux/arm
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.2
API version: 1.41 (minimum version 1.12)
Go version: go1.13.15
Git commit: 8891c58
Built: Mon Dec 28 16:15:48 2020
OS/Arch: linux/arm
Experimental: false
containerd:
Version: 1.4.3
GitCommit: 269548fa27e0089a8b8278fc4fc781d7f65a939b
runc:
Version: 1.0.0-rc92
GitCommit: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
docker-init:
Version: 0.19.0
GitCommit: de40ad0
```
```
`.::///+:/-. --///+//-:`` pi@raspberrypi
`+oooooooooooo: `+oooooooooooo: --------------
/oooo++//ooooo: ooooo+//+ooooo. OS: Raspbian GNU/Linux 10 (buster) armv7l
`+ooooooo:-:oo- +o+::/ooooooo: Host: Raspberry Pi 4 Model B Rev 1.4
`:oooooooo+`` `.oooooooo+- Kernel: 5.4.79-v7l+
`:++ooo/. :+ooo+/.` Uptime: 8 days, 29 mins
...` `.----.` ``.. Packages: 1872 (dpkg)
.::::-``:::::::::.`-:::-` Shell: bash 5.0.3
-:::-` .:::::::-` `-:::- Terminal: /dev/pts/1
`::. `.--.` `` `.---.``.::` CPU: BCM2711 (4) @ 1.500GHz
.::::::::` -::::::::` ` Memory: 855MiB / 7874MiB
.::` .:::::::::- `::::::::::``::.
-:::` ::::::::::. ::::::::::.`:::-
:::: -::::::::. `-:::::::: ::::
-::- .-:::-.``....``.-::-. -::-
.. `` .::::::::. `..`..
-:::-` -::::::::::` .:::::`
:::::::` -::::::::::` :::::::.
.::::::: -::::::::. ::::::::
`-:::::` ..--.` ::::::.
`...` `...--..` `...`
.::::::::::
`.-::::-`
```
**Logs**
```
Failed to send notification via shoutrrr (url=gotify://192.168.1.175:8188/removed):
failed to send notification to Gotify: Post "https://192.168.1.175/message?token=removed":
dial tcp 192.168.1.175:443: connect: connection refused,
```
**Additional context**
Working example using httpie CLI
```
http -f POST "http://192.168.1.175:8188/message?token=Removed" title="my title" message="my message" priority="5"
HTTP/1.1 200 OK
Content-Length: 114
Content-Type: application/json
Date: Wed, 13 Jan 2021 10:49:48 GMT
{
"appid": 1,
"date": "2021-01-13T10:49:48.068486043Z",
"id": 30,
"message": "my message",
"priority": 5,
"title": "my title"
}
```
| priority | ignores port when sending to gotify describe the bug the port is being ignored when sending notification to gotify environment client docker engine community version api version go version git commit built mon dec os arch linux arm context default experimental true server docker engine community engine version api version minimum version go version git commit built mon dec os arch linux arm experimental false containerd version gitcommit runc version gitcommit docker init version gitcommit pi raspberrypi oooooooooooo oooooooooooo oooo ooooo ooooo ooooo os raspbian gnu linux buster ooooooo oo o ooooooo host raspberry pi model b rev oooooooo oooooooo kernel ooo ooo uptime days mins packages dpkg shell bash terminal dev pts cpu memory logs failed to send notification via shoutrrr url gotify removed failed to send notification to gotify post dial tcp connect connection refused additional context working example using httpie cli http f post title my title message my message priority http ok content length content type application json date wed jan gmt appid date id message my message priority title my title | 1 |
532,607 | 15,559,946,224 | IssuesEvent | 2021-03-16 12:09:02 | radical-cybertools/radical.pilot | https://api.github.com/repos/radical-cybertools/radical.pilot | opened | missing cb on task CANCELED state | comp:umgr priority:medium topic:api type:bug | ```py
#!/usr/bin/env python3
import radical.pilot as rp
# ------------------------------------------------------------------------------
#
if __name__ == '__main__':
session = rp.Session()
try:
pmgr = rp.PilotManager(session=session)
pdesc = rp.PilotDescription({'resource': 'local.localhost',
'runtime' : 60,
'cores' : 64})
pilot = pmgr.submit_pilots([pdesc])
tmgr = rp.TaskManager(session=session)
# -----------------------------
def cb(task, state):
print(task.uid, task.state)
# -----------------------------
tmgr.register_callback(cb)
tmgr.add_pilots(pilot)
td = rp.TaskDescription({'executable': '/bin/sleep',
'arguments' : ['100']})
task = tmgr.submit_tasks(td)
task.wait(state=rp.AGENT_STAGING_INPUT_PENDING)
task.cancel()
task.wait()
finally:
session.close(download=False)
# ------------------------------------------------------------------------------
```
output:
```sh
$ ./t.py
new session: [rp.session.rivendell.merzky.018702.0022] \
database : [mongodb://localhost/am] ok
create pilot manager ok
submit 1 pilot(s)
pilot.0000 local.localhost 64 cores 0 gpus ok
create task manager ok
submit: ########################################################################
task.000000 TMGR_SCHEDULING_PENDING
task.000000 TMGR_SCHEDULING
task.000000 TMGR_STAGING_INPUT_PENDING
task.000000 TMGR_STAGING_INPUT
task.000000 AGENT_STAGING_INPUT_PENDING
wait : task.000000 CANCELED
########################################################################
CANCELED : 1
ok
closing session rp.session.rivendell.merzky.018702.0022 \
close task manager ok
close pilot manager \
wait for 1 pilot(s)
0 ok
ok
session lifetime: 28.2s ok
``` | 1.0 | missing cb on task CANCELED state - ```py
#!/usr/bin/env python3
import radical.pilot as rp
# ------------------------------------------------------------------------------
#
if __name__ == '__main__':
session = rp.Session()
try:
pmgr = rp.PilotManager(session=session)
pdesc = rp.PilotDescription({'resource': 'local.localhost',
'runtime' : 60,
'cores' : 64})
pilot = pmgr.submit_pilots([pdesc])
tmgr = rp.TaskManager(session=session)
# -----------------------------
def cb(task, state):
print(task.uid, task.state)
# -----------------------------
tmgr.register_callback(cb)
tmgr.add_pilots(pilot)
td = rp.TaskDescription({'executable': '/bin/sleep',
'arguments' : ['100']})
task = tmgr.submit_tasks(td)
task.wait(state=rp.AGENT_STAGING_INPUT_PENDING)
task.cancel()
task.wait()
finally:
session.close(download=False)
# ------------------------------------------------------------------------------
```
output:
```sh
$ ./t.py
new session: [rp.session.rivendell.merzky.018702.0022] \
database : [mongodb://localhost/am] ok
create pilot manager ok
submit 1 pilot(s)
pilot.0000 local.localhost 64 cores 0 gpus ok
create task manager ok
submit: ########################################################################
task.000000 TMGR_SCHEDULING_PENDING
task.000000 TMGR_SCHEDULING
task.000000 TMGR_STAGING_INPUT_PENDING
task.000000 TMGR_STAGING_INPUT
task.000000 AGENT_STAGING_INPUT_PENDING
wait : task.000000 CANCELED
########################################################################
CANCELED : 1
ok
closing session rp.session.rivendell.merzky.018702.0022 \
close task manager ok
close pilot manager \
wait for 1 pilot(s)
0 ok
ok
session lifetime: 28.2s ok
``` | priority | missing cb on task canceled state py usr bin env import radical pilot as rp if name main session rp session try pmgr rp pilotmanager session session pdesc rp pilotdescription resource local localhost runtime cores pilot pmgr submit pilots tmgr rp taskmanager session session def cb task state print task uid task state tmgr register callback cb tmgr add pilots pilot td rp taskdescription executable bin sleep arguments task tmgr submit tasks td task wait state rp agent staging input pending task cancel task wait finally session close download false output sh t py new session database ok create pilot manager ok submit pilot s pilot local localhost cores gpus ok create task manager ok submit task tmgr scheduling pending task tmgr scheduling task tmgr staging input pending task tmgr staging input task agent staging input pending wait task canceled canceled ok closing session rp session rivendell merzky close task manager ok close pilot manager wait for pilot s ok ok session lifetime ok | 1 |
313,198 | 9,558,203,024 | IssuesEvent | 2019-05-03 13:40:44 | conan-io/conan | https://api.github.com/repos/conan-io/conan | closed | cppstd as a subsetting | complex: medium priority: high stage: review type: feature | We considered better to have the cppstd as a subsetting of every compiler:
- If you don't specify the -s compiler.cppstd=XXX it won't change anything, because will come with a None default value.
- We won't break anything, the global setting will be kept, but deprecated (comment at settings.yml and docs).
- So any recipe could receive the subsetting and compile with a different version of the language automatically.
- The reasons to do a first level setting was to not generate new binary IDS for C libraries that doesn't remove the subsetting. We consider more important to be "injectable" at any recipe.
- Conan will fail if both are specified.
- Conan will try internally to manage the value of the new one, copying from the old one if possible (to ease the code and future deprecation of the global setting)
- Each compiler will have their own values, it has a lot of sense.
| 1.0 | cppstd as a subsetting - We considered better to have the cppstd as a subsetting of every compiler:
- If you don't specify the -s compiler.cppstd=XXX it won't change anything, because will come with a None default value.
- We won't break anything, the global setting will be kept, but deprecated (comment at settings.yml and docs).
- So any recipe could receive the subsetting and compile with a different version of the language automatically.
- The reasons to do a first level setting was to not generate new binary IDS for C libraries that doesn't remove the subsetting. We consider more important to be "injectable" at any recipe.
- Conan will fail if both are specified.
- Conan will try internally to manage the value of the new one, copying from the old one if possible (to ease the code and future deprecation of the global setting)
- Each compiler will have their own values, it has a lot of sense.
| priority | cppstd as a subsetting we considered better to have the cppstd as a subsetting of every compiler if you don t specify the s compiler cppstd xxx it won t change anything because will come with a none default value we won t break anything the global setting will be kept but deprecated comment at settings yml and docs so any recipe could receive the subsetting and compile with a different version of the language automatically the reasons to do a first level setting was to not generate new binary ids for c libraries that doesn t remove the subsetting we consider more important to be injectable at any recipe conan will fail if both are specified conan will try internally to manage the value of the new one copying from the old one if possible to ease the code and future deprecation of the global setting each compiler will have their own values it has a lot of sense | 1 |
696,144 | 23,886,869,243 | IssuesEvent | 2022-09-08 08:23:02 | netdata/netdata-cloud | https://api.github.com/repos/netdata/netdata-cloud | closed | [BUG] internal server errors in data requests should be descriptive | bug internal submit priority/medium visualizations-team stability | 1.0 | [BUG] internal server errors in data requests should be descriptive - 
https://app.netdata.cloud/api/v2/spaces/31a2fba1-8ee2-4fa0-9fed-bb437e3fb75c/rooms/9e949183-e2a6-44a5-88df-deb4f3f4271e/data
Request payload:
```json
{
"filter": {
"nodeIDs": [
"19f433b0-a9a8-4e0f-b720-485b4178dd52",
"1a031558-2082-48de-a423-3c3807288982",
"1b48aa88-55c6-4a1a-9fc7-6e72e77abbd6",
"25087e88-fd41-4ffd-a1e3-71a7cfa27e24",
"36eab6e6-f998-4b76-b6f8-8eff0c35b2e1",
"4561d8ae-bbf5-462f-9ab6-af7b53faac6d",
"53a2987e-6edf-4b5f-a043-8c83697f6b5e",
"6b99aad5-df7f-456f-b2af-cbaf1693f3e3",
"747e73ed-bfa6-4b15-bebc-d5d9342069b6",
"ad2a8a1c-22c3-4e6e-b4ee-2550e8a6c060",
"bcfa3d5e-befb-4c60-948a-9b1de3d71fa5",
"d59508f8-a243-49a5-8bd6-bfecbc6ddb4c",
"de74ab82-c404-4a75-bb89-a3dbebad5582",
"e0fc8a43-5aa5-427a-bac9-6748e4ebd815",
"f36ba8a5-3f10-4a74-936c-fac35c4b992e",
"f44405a5-3b16-412c-bca2-2dbacb300370"
],
"context": "system.ram",
"dimensions": [
"used",
"buffers",
"active",
"wired"
]
},
"aggregations": [{
"method": "avg",
"groupBy": [
"dimension"
]
}
],
"agent_options": [
"absolute",
"percentage",
"jsonwrap",
"nonzero",
"flip",
"ms"
],
"points": 13,
"format": "array",
"group": "average",
"gtime": 0,
"after": 1648012980,
"before": 1648018380
}
```
Response:
```json
{
"api": 1,
"id": "system.ram",
"name": "system.ram",
"view_update_every": 415,
"first_entry": 1647605480,
"last_entry": 1648026885,
"after": 1648012726,
"before": 1648018120,
"min": 1.698190554545455,
"max": 16.03799108181818,
"dimension_names": [
"buffers",
"used"
],
"dimension_ids": [
"buffers",
"used"
],
"view_latest_values": [1.6997235363636363,16.03799108181818
],
"dimensions": 2,
"points": 13,
"format": "json",
"result": {
"labels": [
"time",
"buffers",
"used"
],
"data": [[1648013140000,1.6988688363636362,15.689459827272726
],[1648013555000,1.6989949545454544,15.737082318181818
],[1648013970000,1.6983231363636364,15.772068690909093
],[1648014385000,1.698190554545455,15.777803909090908
],[1648014800000,1.6983587454545457,15.787101727272727
],[1648015215000,1.6984849545454546,15.743213390909093
],[1648015630000,1.698705209090909,15.90895988181818
],[1648016045000,1.6990130818181817,15.841612845454547
],[1648016460000,1.699146509090909,15.869238972727276
],[1648016875000,1.6992821636363635,15.785360199999998
],[1648017290000,1.6994099363636364,15.789540336363636
],[1648017705000,1.699541627272727,15.925925345454546
],[1648018120000,1.6997235363636363,16.03799108181818
]
]
},
"nodes": [{
"id": "19f433b0-a9a8-4e0f-b720-485b4178dd52",
"latency": "507.896244ms",
"chartIDs": [
"system.ram"
],
"hops": 1,
"coverage": {
"score": 1,
"issues": [
]
}
},{
"id": "1a031558-2082-48de-a423-3c3807288982",
"latency": "819.013017ms",
"chartIDs": [
"system.ram"
],
"hops": 1,
"coverage": {
"score": 1,
"issues": [
]
}
},{
"id": "1b48aa88-55c6-4a1a-9fc7-6e72e77abbd6",
"latency": "703.186707ms",
"chartIDs": [
"system.ram"
],
"hops": 1,
"coverage": {
"score": 1,
"issues": [
]
}
},{
"id": "25087e88-fd41-4ffd-a1e3-71a7cfa27e24",
"latency": "75.718061ms",
"chartIDs": [
"system.ram"
],
"hops": 1,
"coverage": {
"score": 1,
"issues": [
]
}
},{
"id": "36eab6e6-f998-4b76-b6f8-8eff0c35b2e1",
"latency": "0s",
"error": {
"errorMsgKey": "ErrInternal",
"errorMessage": "Internal Server Error",
"errorCode": "uohY73Xcw4-55377481"
},
"chartIDs": [
],
"hops": 0
},{
"id": "4561d8ae-bbf5-462f-9ab6-af7b53faac6d",
"latency": "38.416387ms",
"chartIDs": [
"system.ram"
],
"hops": 1,
"coverage": {
"score": 1,
"issues": [
]
}
},{
"id": "53a2987e-6edf-4b5f-a043-8c83697f6b5e",
"latency": "39.488439ms",
"chartIDs": [
"system.ram"
],
"hops": 1,
"coverage": {
"score": 1,
"issues": [
]
}
},{
"id": "6b99aad5-df7f-456f-b2af-cbaf1693f3e3",
"latency": "301.369492ms",
"chartIDs": [
"system.ram"
],
"hops": 1,
"coverage": {
"score": 1,
"issues": [
]
}
},{
"id": "747e73ed-bfa6-4b15-bebc-d5d9342069b6",
"latency": "346.402573ms",
"chartIDs": [
"system.ram"
],
"hops": 1,
"coverage": {
"score": 1,
"issues": [
]
}
},{
"id": "ad2a8a1c-22c3-4e6e-b4ee-2550e8a6c060",
"latency": "372.888153ms",
"chartIDs": [
"system.ram"
],
"hops": 1,
"coverage": {
"score": 1,
"issues": [
]
}
},{
"id": "bcfa3d5e-befb-4c60-948a-9b1de3d71fa5",
"latency": "64.907024ms",
"chartIDs": [
"system.ram"
],
"hops": 1,
"coverage": {
"score": 1,
"issues": [
]
}
},{
"id": "d59508f8-a243-49a5-8bd6-bfecbc6ddb4c",
"latency": "575.115215ms",
"chartIDs": [
"system.ram"
],
"hops": 1,
"coverage": {
"score": 1,
"issues": [
]
}
},{
"id": "de74ab82-c404-4a75-bb89-a3dbebad5582",
"latency": "0s",
"error": {
"errorMsgKey": "ErrInternal",
"errorMessage": "Internal Server Error",
"errorCode": "uohY73Xcw4-55377481"
},
"chartIDs": [
],
"hops": 0
},{
"id": "e0fc8a43-5aa5-427a-bac9-6748e4ebd815",
"latency": "0s",
"error": {
"errorMsgKey": "ErrInternal",
"errorMessage": "Internal Server Error",
"errorCode": "uohY73Xcw4-55377481"
},
"chartIDs": [
],
"hops": 0
},{
"id": "f36ba8a5-3f10-4a74-936c-fac35c4b992e",
"latency": "0s",
"error": {
"errorMsgKey": "ErrInternal",
"errorMessage": "Internal Server Error",
"errorCode": "uohY73Xcw4-55377481"
},
"chartIDs": [
],
"hops": 0
},{
"id": "f44405a5-3b16-412c-bca2-2dbacb300370",
"latency": "0s",
"error": {
"errorMsgKey": "ErrInternal",
"errorMessage": "Internal Server Error",
"errorCode": "uohY73Xcw4-55377481"
},
"chartIDs": [
],
"hops": 0
}
],
"keys": {
"dimension": [
"buffers",
"used"
]
},
"labels": {
}
}
```
---
## Discussion
`ErrInternal` should be completely eliminated. There is no condition under which such a generic error message should be returned. The error message returned should describe the exact reason the system decided it has an internal server error.
This is very important. Without proper coding of the errors, all the effort to triage such conditions falls to the back-end team, while with the proper coding we could immediately know what is wrong and how the front-end should deal with the errors.
@hugovalente-pm @ralphm @papazach | priority | internal server errors in data requests should be descriptive request payload json filter nodeids bebc befb context system ram dimensions used buffers active wired aggregations method avg groupby dimension agent options absolute percentage jsonwrap nonzero flip ms points format array group average gtime after before response json api id system ram name system ram view update every first entry last entry after before min max dimension names buffers used dimension ids buffers used view latest values dimensions points format json result labels time buffers used data nodes id latency chartids system ram hops coverage score issues id latency chartids system ram hops coverage score issues id latency chartids system ram hops coverage score issues id latency chartids system ram hops coverage score issues id latency error errormsgkey errinternal errormessage internal server error errorcode chartids hops id latency chartids system ram hops coverage score issues id latency chartids system ram hops coverage score issues id latency chartids system ram hops coverage score issues id bebc latency chartids system ram hops coverage score issues id latency chartids system ram hops coverage score issues id befb latency chartids system ram hops coverage score issues id latency chartids system ram hops coverage score issues id latency error errormsgkey errinternal errormessage internal server error errorcode chartids hops id latency error errormsgkey errinternal errormessage internal server error errorcode chartids hops id latency error errormsgkey errinternal errormessage internal server error errorcode chartids hops id latency error errormsgkey errinternal errormessage internal server error errorcode chartids hops keys dimension buffers used labels discussion errinternal should be completely eliminated there is no condition under which such a generic error message should be returned the error message returned should describe the exact reason the system decided it has an internal server error this is very important without proper coding of the errors all the effort to triage such conditions falls to the back end team while with the proper coding we could immediately know what is wrong and how the front end should deal with the errors hugovalente pm ralphm papazach | 1 |
115,534 | 4,675,615,571 | IssuesEvent | 2016-10-07 08:35:42 | CS2103AUG2016-F11-C4/main | https://api.github.com/repos/CS2103AUG2016-F11-C4/main | opened | As a new user I can view more information about a particular command | priority.medium | ... so that I can learn how to use various commands | 1.0 | As a new user I can view more information about a particular command - ... so that I can learn how to use various commands | priority | as a new user i can view more information about a particular command so that i can learn how to use various commands | 1 |
366,949 | 10,832,398,145 | IssuesEvent | 2019-11-11 10:30:50 | AY1920S1-CS2103-F10-2/main | https://api.github.com/repos/AY1920S1-CS2103-F10-2/main | closed | Adding tags to templates | priority.Medium | Allows for better categorization of templates especially when the user chooses to use it as a template of recipes.
Can be used in sorting of templates | 1.0 | Adding tags to templates - Allows for better categorization of templates especially when the user chooses to use it as a template of recipes.
Can be used in sorting of templates | priority | adding tags to templates allows for better categorization of templates especially when the user chooses to use it as a template of recipes can be used in sorting of templates | 1 |
174,274 | 6,538,670,224 | IssuesEvent | 2017-09-01 07:39:49 | compodoc/compodoc | https://api.github.com/repos/compodoc/compodoc | reopened | [BUG] Coverage of public params in constructor | Priority: Medium Status: Completed Time: ~1 hour Type: Bug | ##### **Overview of the issue**
I have simple component
```typescript
'use strict';
import { Component } from '@angular/core';
import { NgbActiveModal } from '@ng-bootstrap/ng-bootstrap';
/**
* This class represents the terms and conditions component
*/
@Component({
selector: 'al-tos',
styleUrls: ['../../../customisations/sign-up/tos/tos.component.scss'],
templateUrl: '../../../customisations/sign-up/tos/tos.component.html'
})
export class TOSComponent {
/**
* Terms of license constructor
* @param {NgbActiveModal} activeModal Bootstrap service to work with modal instances
*/
public constructor(
public activeModal: NgbActiveModal
) {}
}
```
And when I generate docs, I have **66% (2/3)** coverage. I also tried to add docs to `activeModal`
```typescript
/**
* Terms of license constructor
* @param {NgbActiveModal} activeModal Bootstrap service to work with modal instances
*/
public constructor(
/*
* {NgbActiveModal} activeModal Modal instance
*/
public activeModal: NgbActiveModal
) {}
```
or
```typescript
/**
* Terms of license constructor
* @param {NgbActiveModal} activeModal Bootstrap service to work with modal instances
*/
public constructor(
/*
* Modal instance
*/
public activeModal: NgbActiveModal
) {}
```
But it didn't help. Did I missed something ?
##### **Operating System, Node.js, npm, compodoc version(s)**
- Node.js : 6.9.5
- npm: 3.10.10
- compodoc: 1.0.0.beta-13
##### **Compodoc installed globally or locally ?**
Compodoc installed locally
| 1.0 | [BUG] Coverage of public params in constructor - ##### **Overview of the issue**
I have simple component
```typescript
'use strict';
import { Component } from '@angular/core';
import { NgbActiveModal } from '@ng-bootstrap/ng-bootstrap';
/**
* This class represents the terms and conditions component
*/
@Component({
selector: 'al-tos',
styleUrls: ['../../../customisations/sign-up/tos/tos.component.scss'],
templateUrl: '../../../customisations/sign-up/tos/tos.component.html'
})
export class TOSComponent {
/**
* Terms of license constructor
* @param {NgbActiveModal} activeModal Bootstrap service to work with modal instances
*/
public constructor(
public activeModal: NgbActiveModal
) {}
}
```
And when I generate docs, I have **66% (2/3)** coverage. I also tried to add docs to `activeModal`
```typescript
/**
* Terms of license constructor
* @param {NgbActiveModal} activeModal Bootstrap service to work with modal instances
*/
public constructor(
/*
* {NgbActiveModal} activeModal Modal instance
*/
public activeModal: NgbActiveModal
) {}
```
or
```typescript
/**
* Terms of license constructor
* @param {NgbActiveModal} activeModal Bootstrap service to work with modal instances
*/
public constructor(
/*
* Modal instance
*/
public activeModal: NgbActiveModal
) {}
```
But it didn't help. Did I missed something ?
##### **Operating System, Node.js, npm, compodoc version(s)**
- Node.js : 6.9.5
- npm: 3.10.10
- compodoc: 1.0.0.beta-13
##### **Compodoc installed globally or locally ?**
Compodoc installed locally
| priority | coverage of public params in constructor overview of the issue i have simple component typescript use strict import component from angular core import ngbactivemodal from ng bootstrap ng bootstrap this class represents the terms and conditions component component selector al tos styleurls templateurl customisations sign up tos tos component html export class toscomponent terms of license constructor param ngbactivemodal activemodal bootstrap service to work with modal instances public constructor public activemodal ngbactivemodal and when i generate docs i have coverage i also tried to add docs to activemodal typescript terms of license constructor param ngbactivemodal activemodal bootstrap service to work with modal instances public constructor ngbactivemodal activemodal modal instance public activemodal ngbactivemodal or typescript terms of license constructor param ngbactivemodal activemodal bootstrap service to work with modal instances public constructor modal instance public activemodal ngbactivemodal but it didn t help did i missed something operating system node js npm compodoc version s node js npm compodoc beta compodoc installed globally or locally compodoc installed locally | 1 |
783,163 | 27,520,840,917 | IssuesEvent | 2023-03-06 14:57:14 | cilium/cilium | https://api.github.com/repos/cilium/cilium | reopened | CNI plugin should tolerate a down agent. | kind/bug priority/medium area/cni sig/agent | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### What happened?
While uninstalling Cilium, deletion of existing pods fails since the agent is down. It fails with the message
` failed to destroy network for sandbox \"\": plugin type=\"cilium-cni\" failed (delete): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" `
The CNI specification says that, on delete, CNI plugins should be as error-tolerant as possible. So, if at all possible, the CNI binary should be able to handle a deletion without the agent running.
### Cilium Version
v1.13.0-rc2
### Kernel Version
n/a
### Kubernetes Version
n/a
### Sysdump
_No response_
### Relevant log output
_No response_
### Anything else?
Related: https://github.com/cilium/cilium/issues/20723
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | 1.0 | CNI plugin should tolerate a down agent. - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### What happened?
While uninstalling Cilium, deletion of existing pods fails since the agent is down. It fails with the message
` failed to destroy network for sandbox \"\": plugin type=\"cilium-cni\" failed (delete): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" `
The CNI specification says that, on delete, CNI plugins should be as error-tolerant as possible. So, if at all possible, the CNI binary should be able to handle a deletion without the agent running.
### Cilium Version
v1.13.0-rc2
### Kernel Version
n/a
### Kubernetes Version
n/a
### Sysdump
_No response_
### Relevant log output
_No response_
### Anything else?
Related: https://github.com/cilium/cilium/issues/20723
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | priority | cni plugin should tolerate a down agent is there an existing issue for this i have searched the existing issues what happened while uninstalling cilium deletion of existing pods fails since the agent is down it fails with the message failed to destroy network for sandbox plugin type cilium cni failed delete unable to connect to cilium daemon failed to create cilium agent client after seconds timeout get dial unix var run cilium cilium sock connect no such file or directory nis the agent running the cni specification says that on delete cni plugins should be as error tolerant as possible so if at all possible the cni binary should be able to handle a deletion without the agent running cilium version kernel version n a kubernetes version n a sysdump no response relevant log output no response anything else related code of conduct i agree to follow this project s code of conduct | 1 |
391,817 | 11,578,978,301 | IssuesEvent | 2020-02-21 16:57:08 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | pend() assertion can allow user threads to crash the kernel | bug priority: medium | **Describe the bug**
There's an assertion deep in the kernel scheduling code which, when a thread is being pended on a wait queue, to raise an assertion of the timeout is negative and not equal to 0. For some reason, this is immediately followed by a runtime check to just set timeout to 0 if assertions aren't built in.
```
static void pend(struct k_thread *thread, _wait_q_t *wait_q, s32_t timeout)
{
...
if (timeout != K_FOREVER) {
s32_t ticks;
__ASSERT(timeout >= 0,
"Only non-negative values are accepted.");
if (timeout < 0) {
timeout = 0;
}
ticks = _TICK_ALIGN + k_ms_to_ticks_ceil32(timeout);
z_add_thread_timeout(thread, ticks);
}
}
```
Unfortunately, this assertion is problematic, as any system call with a timeout parameter can cause the kernel to fail this assertion. A user thread can make a call like `k_sem_take(&sem, -2)` and cause a crash in kernel mode. This is particularly bad because callers of `pend()` are either holding some kind of spinlock or have locked interrupts; a kernel crash in a critical section is an unrecoverable condition.
User threads making system calls are *not* allowed to ever cause the kernel to fail an assertion or otherwise trigger a kernel panic due to the risk of kernel corruption beyond the scope of the calling thread. Fatal errors in system calls are only permitted in a controlled fashion using Z_OOPS() in syscall verification functions.
I can conceive of three ways of addressing this:
1. Delete the assertion. We have a runtime correction any way, what's the point?
2. Delete the assertion and modify pend() to propagate a return value all the way up to callers of z_pend_* APIs
3. Add bounds checks to the implementation functions of all system calls that accept a timeout parameter.
**To Reproduce**
```
diff --git a/tests/kernel/semaphore/semaphore/src/main.c b/tests/kernel/semaphore/semaphore/src/main.c
index 29ef44ac80..a2f8825bf5 100644
--- a/tests/kernel/semaphore/semaphore/src/main.c
+++ b/tests/kernel/semaphore/semaphore/src/main.c
@@ -432,7 +432,7 @@ void test_sem_take_timeout_forever(void)
k_sem_reset(&simple_sem);
- ret_value = k_sem_take(&simple_sem, K_FOREVER);
+ ret_value = k_sem_take(&simple_sem, -2);
zassert_true(ret_value == 0, "k_sem_take failed");
k_thread_abort(&sem_tid);
```
| 1.0 | pend() assertion can allow user threads to crash the kernel - **Describe the bug**
There's an assertion deep in the kernel scheduling code which, when a thread is being pended on a wait queue, to raise an assertion of the timeout is negative and not equal to 0. For some reason, this is immediately followed by a runtime check to just set timeout to 0 if assertions aren't built in.
```
static void pend(struct k_thread *thread, _wait_q_t *wait_q, s32_t timeout)
{
...
if (timeout != K_FOREVER) {
s32_t ticks;
__ASSERT(timeout >= 0,
"Only non-negative values are accepted.");
if (timeout < 0) {
timeout = 0;
}
ticks = _TICK_ALIGN + k_ms_to_ticks_ceil32(timeout);
z_add_thread_timeout(thread, ticks);
}
}
```
Unfortunately, this assertion is problematic, as any system call with a timeout parameter can cause the kernel to fail this assertion. A user thread can make a call like `k_sem_take(&sem, -2)` and cause a crash in kernel mode. This is particularly bad because callers of `pend()` are either holding some kind of spinlock or have locked interrupts; a kernel crash in a critical section is an unrecoverable condition.
User threads making system calls are *not* allowed to ever cause the kernel to fail an assertion or otherwise trigger a kernel panic due to the risk of kernel corruption beyond the scope of the calling thread. Fatal errors in system calls are only permitted in a controlled fashion using Z_OOPS() in syscall verification functions.
I can conceive of three ways of addressing this:
1. Delete the assertion. We have a runtime correction any way, what's the point?
2. Delete the assertion and modify pend() to propagate a return value all the way up to callers of z_pend_* APIs
3. Add bounds checks to the implementation functions of all system calls that accept a timeout parameter.
**To Reproduce**
```
diff --git a/tests/kernel/semaphore/semaphore/src/main.c b/tests/kernel/semaphore/semaphore/src/main.c
index 29ef44ac80..a2f8825bf5 100644
--- a/tests/kernel/semaphore/semaphore/src/main.c
+++ b/tests/kernel/semaphore/semaphore/src/main.c
@@ -432,7 +432,7 @@ void test_sem_take_timeout_forever(void)
k_sem_reset(&simple_sem);
- ret_value = k_sem_take(&simple_sem, K_FOREVER);
+ ret_value = k_sem_take(&simple_sem, -2);
zassert_true(ret_value == 0, "k_sem_take failed");
k_thread_abort(&sem_tid);
```
| priority | pend assertion can allow user threads to crash the kernel describe the bug there s an assertion deep in the kernel scheduling code which when a thread is being pended on a wait queue to raise an assertion of the timeout is negative and not equal to for some reason this is immediately followed by a runtime check to just set timeout to if assertions aren t built in static void pend struct k thread thread wait q t wait q t timeout if timeout k forever t ticks assert timeout only non negative values are accepted if timeout timeout ticks tick align k ms to ticks timeout z add thread timeout thread ticks unfortunately this assertion is problematic as any system call with a timeout parameter can cause the kernel to fail this assertion a user thread can make a call like k sem take sem and cause a crash in kernel mode this is particularly bad because callers of pend are either holding some kind of spinlock or have locked interrupts a kernel crash in a critical section is an unrecoverable condition user threads making system calls are not allowed to ever cause the kernel to fail an assertion or otherwise trigger a kernel panic due to the risk of kernel corruption beyond the scope of the calling thread fatal errors in system calls are only permitted in a controlled fashion using z oops in syscall verification functions i can conceive of three ways of addressing this delete the assertion we have a runtime correction any way what s the point delete the assertion and modify pend to propagate a return value all the way up to callers of z pend apis add bounds checks to the implementation functions of all system calls that accept a timeout parameter to reproduce diff git a tests kernel semaphore semaphore src main c b tests kernel semaphore semaphore src main c index a tests kernel semaphore semaphore src main c b tests kernel semaphore semaphore src main c void test sem take timeout forever void k sem reset simple sem ret value k sem take simple sem k forever ret value k sem take simple sem zassert true ret value k sem take failed k thread abort sem tid | 1 |
180,106 | 6,643,627,680 | IssuesEvent | 2017-09-27 12:07:34 | Victoire/WidgetCoverBundle | https://api.github.com/repos/Victoire/WidgetCoverBundle | opened | Impossible to upload a cover image | Priority : Medium Type : Bug | When I wanted to upload a cover image, I can't click on the "Choose" button. The differents contents (type of link for example) pass over one another.
<img width="1280" alt="capture d ecran 2017-09-27 a 14 04 58" src="https://user-images.githubusercontent.com/32325557/30912464-e952d1a6-a38c-11e7-9b25-37f46cb68f54.png">
| 1.0 | Impossible to upload a cover image - When I wanted to upload a cover image, I can't click on the "Choose" button. The differents contents (type of link for example) pass over one another.
<img width="1280" alt="capture d ecran 2017-09-27 a 14 04 58" src="https://user-images.githubusercontent.com/32325557/30912464-e952d1a6-a38c-11e7-9b25-37f46cb68f54.png">
| priority | impossible to upload a cover image when i wanted to upload a cover image i can t click on the choose button the differents contents type of link for example pass over one another img width alt capture d ecran a src | 1 |
216,184 | 7,301,965,700 | IssuesEvent | 2018-02-27 08:00:00 | Motoxpro/WorldCupStatsSite | https://api.github.com/repos/Motoxpro/WorldCupStatsSite | closed | Add Timed Training Overall | Medium Priority Data Issue Medium Priority Feature MySQL | Create a timed training overall by adding up all of the finishes and taking the lowest score. If a rider hasn't done a timed training round, their finish for that round is last place or just the number of riders in the session.
Don't bother breaking ties | 2.0 | Add Timed Training Overall - Create a timed training overall by adding up all of the finishes and taking the lowest score. If a rider hasn't done a timed training round, their finish for that round is last place or just the number of riders in the session.
Don't bother breaking ties | priority | add timed training overall create a timed training overall by adding up all of the finishes and taking the lowest score if a rider hasn t done a timed training round their finish for that round is last place or just the number of riders in the session don t bother breaking ties | 1 |
301,748 | 9,223,542,809 | IssuesEvent | 2019-03-12 03:58:30 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | net: icmpv4: Zephyr drops valid echo request | area: Conformance area: Networking bug priority: medium | Zephyr drops echo request with valid checksum. Zephyr must respond to valid echo request.
[icmpv4-valid-chksum2.pcap.gz](https://github.com/zephyrproject-rtos/zephyr/files/2610165/icmpv4-valid-chksum2.pcap.gz)
[icmpv4-valid-chksum1.pcap.gz](https://github.com/zephyrproject-rtos/zephyr/files/2610166/icmpv4-valid-chksum1.pcap.gz)
| 1.0 | net: icmpv4: Zephyr drops valid echo request - Zephyr drops echo request with valid checksum. Zephyr must respond to valid echo request.
[icmpv4-valid-chksum2.pcap.gz](https://github.com/zephyrproject-rtos/zephyr/files/2610165/icmpv4-valid-chksum2.pcap.gz)
[icmpv4-valid-chksum1.pcap.gz](https://github.com/zephyrproject-rtos/zephyr/files/2610166/icmpv4-valid-chksum1.pcap.gz)
| priority | net zephyr drops valid echo request zephyr drops echo request with valid checksum zephyr must respond to valid echo request | 1 |
443,384 | 12,793,665,996 | IssuesEvent | 2020-07-02 04:46:39 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | closed | Cannot delete conversation when Notification component is deactivated | Has-PR bug priority: medium status: needs review | **Describe the bug**
Cannot delete conversation when Notification component is deactivated
**To Reproduce**
Steps to reproduce the behavior:
1. Deactivate notification component
2. Try to delete a conversation, notice that it will not be deleted.
**Screencast**
https://drive.google.com/file/d/1KuPuBwPM7Av406m2KrErIHH36c3PMC0R/view
**Support ticket links**
https://secure.helpscout.net/conversation/1205558198/79889
| 1.0 | Cannot delete conversation when Notification component is deactivated - **Describe the bug**
Cannot delete conversation when Notification component is deactivated
**To Reproduce**
Steps to reproduce the behavior:
1. Deactivate notification component
2. Try to delete a conversation, notice that it will not be deleted.
**Screencast**
https://drive.google.com/file/d/1KuPuBwPM7Av406m2KrErIHH36c3PMC0R/view
**Support ticket links**
https://secure.helpscout.net/conversation/1205558198/79889
| priority | cannot delete conversation when notification component is deactivated describe the bug cannot delete conversation when notification component is deactivated to reproduce steps to reproduce the behavior deactivate notification component try to delete a conversation notice that it will not be deleted screencast support ticket links | 1 |
384,251 | 11,386,054,434 | IssuesEvent | 2020-01-29 12:28:26 | NukkitX/Nukkit | https://api.github.com/repos/NukkitX/Nukkit | closed | Speedbug | [Priority] Medium [Status] Unconfirmed [Type] Bug |
### Actual Behavior
<!--- What actually happened -->
I had a problem on my server using Nukkit X last version. Players spawn 9/10 times with a speed effect (not visible on effect menu) very high. Please fix this bug !
### Steps to Reproduce
<!--- Reliable steps which someone can use to reproduce the issue. Please do not create issues for non reproducible bug! -->
I dont know.... sorry :C
| 1.0 | Speedbug -
### Actual Behavior
<!--- What actually happened -->
I had a problem on my server using Nukkit X last version. Players spawn 9/10 times with a speed effect (not visible on effect menu) very high. Please fix this bug !
### Steps to Reproduce
<!--- Reliable steps which someone can use to reproduce the issue. Please do not create issues for non reproducible bug! -->
I dont know.... sorry :C
| priority | speedbug actual behavior i had a problem on my server using nukkit x last version players spawn times with a speed effect not visible on effect menu very high please fix this bug steps to reproduce i dont know sorry c | 1 |
329,621 | 10,022,309,203 | IssuesEvent | 2019-07-16 16:23:10 | strapi/strapi | https://api.github.com/repos/strapi/strapi | closed | Authenticating a user with Google is throwing 500 on the 2nd login | priority: medium status: have to reproduce type: bug 🐛 | <!-- ⚠️ If you do not respect this template your issue will be closed. -->
<!-- =============================================================================== -->
<!-- ⚠️ If you are not using the current Strapi release, you will be asked to update. -->
<!-- Please see the wiki for guides on upgrading to the latest release. -->
<!-- =============================================================================== -->
<!-- ⚠️ Make sure to browse the opened and closed issues before submitting your issue. -->
<!-- ⚠️ Before writing your issue make sure you are using:-->
<!-- Node 10.x.x -->
<!-- npm 6.x.x -->
<!-- The latest version of Strapi. -->
**Informations**
- **Node.js version**: 10.13.0
- **npm version**: 6.4.1
- **Strapi version**: v3.0.0-alpha.14.5
- **Database**: mongodb 3.6.5
- **Operating system**: Win10
**What is the current behavior?**
After the first successful authentication with google, additional attempts to authenticate, at least within the next minutes, fail at redirecting to GET /auth/google/callback as long as the user remains signed in with google in the browser.
Response is 500 at http://localhost:4200/auth/callback/google?error%5Berror%5D=invalid_grant&error%5Berror_description%5D=Malformed%20auth%20code.
Signing off from Google fixes this issue and returns jwt and user response after redirecting to GET /auth/google/callback
**Steps to reproduce the problem**
Authenticate with google
Stay signed in with google
Authenticate again with google
**What is the expected behavior?**
After the first authentication subsequent authentication attempts should also issue a new token.
**Suggested solutions**
I'm not quite sure why this happens since the initial redirect from accounts.google.com/o/oauth2/auth to /connect/google/callback looks pretty much the same on the first and subsequent calls, only difference i could recognize was that the first redirect is encoded while the next ones are not
First attempt, working:
http://localhost:1337/connect/google/callback?code=4%2FogD_bJ10kjNs7l8gqTF25hLsYQPUU-rkatS6jK5shBXcw-lLN0wlJpbSDbyx8zFP2yuDyVLDA1ScgSxLaZxxxxx&scope=openid+email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fplus.me
Next attempt, failing:
http://localhost:1337/connect/google/callback?code=4/ogCWJEaBayzfL2PoY4NJ3rL-11Wu4FNjrPOxRlFG003qPuGCy7nV6qCnlX1vvh_Dt1cUmJZrFn5ESxxxxx&scope=openid+email+https://www.googleapis.com/auth/plus.me+https://www.googleapis.com/auth/userinfo.email | 1.0 | Authenticating a user with Google is throwing 500 on the 2nd login - <!-- ⚠️ If you do not respect this template your issue will be closed. -->
<!-- =============================================================================== -->
<!-- ⚠️ If you are not using the current Strapi release, you will be asked to update. -->
<!-- Please see the wiki for guides on upgrading to the latest release. -->
<!-- =============================================================================== -->
<!-- ⚠️ Make sure to browse the opened and closed issues before submitting your issue. -->
<!-- ⚠️ Before writing your issue make sure you are using:-->
<!-- Node 10.x.x -->
<!-- npm 6.x.x -->
<!-- The latest version of Strapi. -->
**Informations**
- **Node.js version**: 10.13.0
- **npm version**: 6.4.1
- **Strapi version**: v3.0.0-alpha.14.5
- **Database**: mongodb 3.6.5
- **Operating system**: Win10
**What is the current behavior?**
After the first successful authentication with google, additional attempts to authenticate, at least within the next minutes, fail at redirecting to GET /auth/google/callback as long as the user remains signed in with google in the browser.
Response is 500 at http://localhost:4200/auth/callback/google?error%5Berror%5D=invalid_grant&error%5Berror_description%5D=Malformed%20auth%20code.
Signing off from Google fixes this issue and returns jwt and user response after redirecting to GET /auth/google/callback
**Steps to reproduce the problem**
Authenticate with google
Stay signed in with google
Authenticate again with google
**What is the expected behavior?**
After the first authentication subsequent authentication attempts should also issue a new token.
**Suggested solutions**
I'm not quite sure why this happens since the initial redirect from accounts.google.com/o/oauth2/auth to /connect/google/callback looks pretty much the same on the first and subsequent calls, only difference i could recognize was that the first redirect is encoded while the next ones are not
First attempt, working:
http://localhost:1337/connect/google/callback?code=4%2FogD_bJ10kjNs7l8gqTF25hLsYQPUU-rkatS6jK5shBXcw-lLN0wlJpbSDbyx8zFP2yuDyVLDA1ScgSxLaZxxxxx&scope=openid+email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fplus.me
Next attempt, failing:
http://localhost:1337/connect/google/callback?code=4/ogCWJEaBayzfL2PoY4NJ3rL-11Wu4FNjrPOxRlFG003qPuGCy7nV6qCnlX1vvh_Dt1cUmJZrFn5ESxxxxx&scope=openid+email+https://www.googleapis.com/auth/plus.me+https://www.googleapis.com/auth/userinfo.email | priority | authenticating a user with google is throwing on the login informations node js version npm version strapi version alpha database mongodb operating system what is the current behavior after the first successful authentication with google additional attempts to authenticate at least within the next minutes fail at redirecting to get auth google callback as long as the user remains signed in with google in the browser response is at signing off from google fixes this issue and returns jwt and user response after redirecting to get auth google callback steps to reproduce the problem authenticate with google stay signed in with google authenticate again with google what is the expected behavior after the first authentication subsequent authentication attempts should also issue a new token suggested solutions i m not quite sure why this happens since the initial redirect from accounts google com o auth to connect google callback looks pretty much the same on the first and subsequent calls only difference i could recognize was that the first redirect is encoded while the next ones are not first attempt working next attempt failing | 1 |
271,769 | 8,489,111,978 | IssuesEvent | 2018-10-26 18:48:00 | wevote/WebApp | https://api.github.com/repos/wevote/WebApp | closed | Race Level & Decided State: Save in Store | Difficulty: Medium Priority: 1 | ### Please describe the issue (What happens? What do you expect?)
When a voter dives in to look at a candidate or measure from the Ballot page, and then comes back to the ballot, the "Race Level" (Federal, State, Measure, Local), and the "Decided State" (Remaining Decisions, All Items, Items Decided) both get unset. Store these "states" in a Store so they remain the same when the voter returns to the Ballot.
| 1.0 | Race Level & Decided State: Save in Store - ### Please describe the issue (What happens? What do you expect?)
When a voter dives in to look at a candidate or measure from the Ballot page, and then comes back to the ballot, the "Race Level" (Federal, State, Measure, Local), and the "Decided State" (Remaining Decisions, All Items, Items Decided) both get unset. Store these "states" in a Store so they remain the same when the voter returns to the Ballot.
| priority | race level decided state save in store please describe the issue what happens what do you expect when a voter dives in to look at a candidate or measure from the ballot page and then comes back to the ballot the race level federal state measure local and the decided state remaining decisions all items items decided both get unset store these states in a store so they remain the same when the voter returns to the ballot | 1 |
108,028 | 4,325,558,759 | IssuesEvent | 2016-07-26 00:28:34 | syb0rg/Khronos | https://api.github.com/repos/syb0rg/Khronos | closed | CMake Rebuild Problems | Priority: Medium Status: Completed Status: Pending Type: Bug | Upon rebuilding, CMake sometimes fails to download external dependencies. Deleting the project's folder in the `libs` solves the issue. | 1.0 | CMake Rebuild Problems - Upon rebuilding, CMake sometimes fails to download external dependencies. Deleting the project's folder in the `libs` solves the issue. | priority | cmake rebuild problems upon rebuilding cmake sometimes fails to download external dependencies deleting the project s folder in the libs solves the issue | 1 |
366,310 | 10,819,566,544 | IssuesEvent | 2019-11-08 14:39:30 | raz0red/wii-mednafen | https://api.github.com/repos/raz0red/wii-mednafen | closed | NTFS Support | Priority-Medium Type-Enhancement auto-migrated | ```
Please add Support to NTFS Hard Disk!
```
Original issue reported on code.google.com by `[email protected]` on 3 Mar 2012 at 9:41
| 1.0 | NTFS Support - ```
Please add Support to NTFS Hard Disk!
```
Original issue reported on code.google.com by `[email protected]` on 3 Mar 2012 at 9:41
| priority | ntfs support please add support to ntfs hard disk original issue reported on code google com by cardelli gmail com on mar at | 1 |
517,703 | 15,018,708,058 | IssuesEvent | 2021-02-01 12:36:14 | buidl-labs/crypto-code-school-inside-tezos | https://api.github.com/repos/buidl-labs/crypto-code-school-inside-tezos | opened | Auth flow | Priority: Medium Type: Enhancement | Edge case that needs to be handled:
1. If the wallet is Uninstall after log in. Don't allow users to login.
2. Update copy for creating account in case where If no account is found. | 1.0 | Auth flow - Edge case that needs to be handled:
1. If the wallet is Uninstall after log in. Don't allow users to login.
2. Update copy for creating account in case where If no account is found. | priority | auth flow edge case that needs to be handled if the wallet is uninstall after log in don t allow users to login update copy for creating account in case where if no account is found | 1 |
44,066 | 2,899,099,610 | IssuesEvent | 2015-06-17 09:12:43 | greenlion/PHP-SQL-Parser | https://api.github.com/repos/greenlion/PHP-SQL-Parser | closed | 1. please turn off debug print_r() 2. probbaly, some bug in parser triggers it | bug imported Priority-Medium | _From [[email protected]](https://code.google.com/u/113326933444776031366/) on June 19, 2011 15:31:59_
WHAT STEPS WILL REPRODUCE THE PROBLEM?
1. $sql = "SELECT SQL_CALC_FOUND_ROWS SmTable.*, MATCH (SmTable.fulltextsearch_keyword) AGAINST ('google googles') AS keyword_score FROM SmTable WHERE SmTable.status = 'A' AND (SmTable.country_id = 1 AND SmTable.state_id = 10) AND MATCH (SmTable.fulltextsearch_keyword) AGAINST ('google googles') ORDER BY SmTable.level DESC, keyword_score DESC LIMIT 0,10"
2. $parser = new PHPSQLParser($sql);
WHAT IS THE EXPECTED OUTPUT? WHAT DO YOU SEE INSTEAD?
No output expected.
Instead, it hits line 1242 and prints smth:
if(!is_array($processed)) {
print_r($processed); // 1242
$processed = false;
} What version of the product are you using? On what operating system? http://php-sql-parser.googlecode.com/svn/trunk/php-sql-parser.php uname -a
Linux ********* `#1` SMP Tue Sep 1 10:25:30 EDT 2009 x86_64 GNU/Linux
_Original issue: http://code.google.com/p/php-sql-parser/issues/detail?id=12_ | 1.0 | 1. please turn off debug print_r() 2. probbaly, some bug in parser triggers it - _From [[email protected]](https://code.google.com/u/113326933444776031366/) on June 19, 2011 15:31:59_
WHAT STEPS WILL REPRODUCE THE PROBLEM?
1. $sql = "SELECT SQL_CALC_FOUND_ROWS SmTable.*, MATCH (SmTable.fulltextsearch_keyword) AGAINST ('google googles') AS keyword_score FROM SmTable WHERE SmTable.status = 'A' AND (SmTable.country_id = 1 AND SmTable.state_id = 10) AND MATCH (SmTable.fulltextsearch_keyword) AGAINST ('google googles') ORDER BY SmTable.level DESC, keyword_score DESC LIMIT 0,10"
2. $parser = new PHPSQLParser($sql);
WHAT IS THE EXPECTED OUTPUT? WHAT DO YOU SEE INSTEAD?
No output expected.
Instead, it hits line 1242 and prints smth:
if(!is_array($processed)) {
print_r($processed); // 1242
$processed = false;
} What version of the product are you using? On what operating system? http://php-sql-parser.googlecode.com/svn/trunk/php-sql-parser.php uname -a
Linux ********* `#1` SMP Tue Sep 1 10:25:30 EDT 2009 x86_64 GNU/Linux
_Original issue: http://code.google.com/p/php-sql-parser/issues/detail?id=12_ | priority | please turn off debug print r probbaly some bug in parser triggers it from on june what steps will reproduce the problem sql select sql calc found rows smtable match smtable fulltextsearch keyword against google googles as keyword score from smtable where smtable status a and smtable country id and smtable state id and match smtable fulltextsearch keyword against google googles order by smtable level desc keyword score desc limit parser new phpsqlparser sql what is the expected output what do you see instead no output expected instead it hits line and prints smth if is array processed print r processed processed false what version of the product are you using on what operating system uname a linux smp tue sep edt gnu linux original issue | 1 |
705,042 | 24,219,305,358 | IssuesEvent | 2022-09-26 09:29:53 | Co-Laon/claon-server | https://api.github.com/repos/Co-Laon/claon-server | closed | 검색 화면 구현 | enhancement priority: medium | ## Describe
검색어와 유사한 이용자와 암장을 검색 허용
## (Optional) Solution
Please describe your preferred solution
-
| 1.0 | 검색 화면 구현 - ## Describe
검색어와 유사한 이용자와 암장을 검색 허용
## (Optional) Solution
Please describe your preferred solution
-
| priority | 검색 화면 구현 describe 검색어와 유사한 이용자와 암장을 검색 허용 optional solution please describe your preferred solution | 1 |
48,911 | 3,000,833,308 | IssuesEvent | 2015-07-24 06:34:15 | jayway/powermock | https://api.github.com/repos/jayway/powermock | closed | SAX2 parsing - com.sun.org.apache.xerces.internal.parsers.SAXParser cannot be cast to org.xml.sax.XMLReader | bug imported invalid Priority-Medium | _From [[email protected]](https://code.google.com/u/111621038469822880271/) on July 28, 2010 08:58:48_
Hi all,
First of all, my problem is probably similar to the following issue: http://groups.google.com/group/powermock/browse_thread/thread/88079512f2dfcbd1/59b5d831e3e7f9b8?pli=1 basically, I have a hibernate configuration that gets loaded in my test.
I'm getting the following error:
\--------------------------------------------------
java.lang.ClassCastException: com.sun.org.apache.xerces.internal.parsers.SAXParser cannot be cast to org.xml.sax.XMLReader
at org.xml.sax.helpers.XMLReaderFactory.loadClass(XMLReaderFactory.java:199)
at org.xml.sax.helpers.XMLReaderFactory.createXMLReader(XMLReaderFactory.java:150)
at org.dom4j.io.SAXHelper.createXMLReader(SAXHelper.java:83)
at org.dom4j.io.SAXReader.createXMLReader(SAXReader.java:894)
at org.dom4j.io.SAXReader.getXMLReader(SAXReader.java:715)
at org.dom4j.io.SAXReader.read(SAXReader.java:435)
org.hibernate.HibernateException: Could not parse configuration: inMemory-hibernate.cfg.xml
at org.hibernate.cfg.Configuration.doConfigure(Configuration.java:1528)
at org.hibernate.cfg.AnnotationConfiguration.doConfigure(AnnotationConfiguration.java:1035)
at org.hibernate.cfg.AnnotationConfiguration.doConfigure(AnnotationConfiguration.java:64)
at org.hibernate.cfg.Configuration.configure(Configuration.java:1462)
at org.hibernate.cfg.AnnotationConfiguration.configure(AnnotationConfiguration.java:1017)
...
..
Caused by: org.dom4j.DocumentException: SAX2 driver class com.sun.org.apache.xerces.internal.parsers.SAXParser does not implement XMLReader Nested exception: SAX2 driver class com.sun.org.apache.xerces.internal.parsers.SAXParser does not implement XMLReader
at org.dom4j.io.SAXReader.read(SAXReader.java:484)
at org.hibernate.cfg.Configuration.doConfigure(Configuration.java:1518)
... 34 more
at org.hibernate.cfg.Configuration.doConfigure(Configuration.java:1518)
Warning: Caught exception attempting to use SAX to load a SAX XMLReader
at org.hibernate.cfg.AnnotationConfiguration.doConfigure(AnnotationConfiguration.java:1035)
at org.hibernate.cfg.AnnotationConfiguration.doConfigure(AnnotationConfiguration.java:64)
Warning: Exception was: java.lang.ClassCastException: com.sun.org.apache.xerces.internal.parsers.SAXParser cannot be cast to org.xml.sax.XMLReader
at org.hibernate.cfg.Configuration.configure(Configuration.java:1462)
Warning: I will print the stack trace then carry on using the default SAX parser
at org.hibernate.cfg.AnnotationConfiguration.configure(AnnotationConfiguration.java:1017)
...
..
\--------------------------------------
I've tried the recommendation suggested in the issue I mentioned above, without any luck:
\----------------------------------
@RunWith(PowerMockRunner.class)
@PrepareForTest({MyServiceImpl.class})
@PowerMockIgnore( { "com.sun.org.apache.xerces.*", "org.dom4j.*", "org.xml.sax.*" })
public class MyTest extends InMemoryLookupTests{
private MyServiceImpl service;
...
@Before
public void setup() throws Exception {
MockitoAnnotations.initMocks(this);
service = new MyServiceImpl();
....
}
.....
}
\------------------------------------
Could someone please share some light?
Thanks in advance!
Regards,
Alex.
_Original issue: http://code.google.com/p/powermock/issues/detail?id=270_ | 1.0 | SAX2 parsing - com.sun.org.apache.xerces.internal.parsers.SAXParser cannot be cast to org.xml.sax.XMLReader - _From [[email protected]](https://code.google.com/u/111621038469822880271/) on July 28, 2010 08:58:48_
Hi all,
First of all, my problem is probably similar to the following issue: http://groups.google.com/group/powermock/browse_thread/thread/88079512f2dfcbd1/59b5d831e3e7f9b8?pli=1 basically, I have a hibernate configuration that gets loaded in my test.
I'm getting the following error:
\--------------------------------------------------
java.lang.ClassCastException: com.sun.org.apache.xerces.internal.parsers.SAXParser cannot be cast to org.xml.sax.XMLReader
at org.xml.sax.helpers.XMLReaderFactory.loadClass(XMLReaderFactory.java:199)
at org.xml.sax.helpers.XMLReaderFactory.createXMLReader(XMLReaderFactory.java:150)
at org.dom4j.io.SAXHelper.createXMLReader(SAXHelper.java:83)
at org.dom4j.io.SAXReader.createXMLReader(SAXReader.java:894)
at org.dom4j.io.SAXReader.getXMLReader(SAXReader.java:715)
at org.dom4j.io.SAXReader.read(SAXReader.java:435)
org.hibernate.HibernateException: Could not parse configuration: inMemory-hibernate.cfg.xml
at org.hibernate.cfg.Configuration.doConfigure(Configuration.java:1528)
at org.hibernate.cfg.AnnotationConfiguration.doConfigure(AnnotationConfiguration.java:1035)
at org.hibernate.cfg.AnnotationConfiguration.doConfigure(AnnotationConfiguration.java:64)
at org.hibernate.cfg.Configuration.configure(Configuration.java:1462)
at org.hibernate.cfg.AnnotationConfiguration.configure(AnnotationConfiguration.java:1017)
...
..
Caused by: org.dom4j.DocumentException: SAX2 driver class com.sun.org.apache.xerces.internal.parsers.SAXParser does not implement XMLReader Nested exception: SAX2 driver class com.sun.org.apache.xerces.internal.parsers.SAXParser does not implement XMLReader
at org.dom4j.io.SAXReader.read(SAXReader.java:484)
at org.hibernate.cfg.Configuration.doConfigure(Configuration.java:1518)
... 34 more
at org.hibernate.cfg.Configuration.doConfigure(Configuration.java:1518)
Warning: Caught exception attempting to use SAX to load a SAX XMLReader
at org.hibernate.cfg.AnnotationConfiguration.doConfigure(AnnotationConfiguration.java:1035)
at org.hibernate.cfg.AnnotationConfiguration.doConfigure(AnnotationConfiguration.java:64)
Warning: Exception was: java.lang.ClassCastException: com.sun.org.apache.xerces.internal.parsers.SAXParser cannot be cast to org.xml.sax.XMLReader
at org.hibernate.cfg.Configuration.configure(Configuration.java:1462)
Warning: I will print the stack trace then carry on using the default SAX parser
at org.hibernate.cfg.AnnotationConfiguration.configure(AnnotationConfiguration.java:1017)
...
..
\--------------------------------------
I've tried the recommendation suggested in the issue I mentioned above, without any luck:
\----------------------------------
@RunWith(PowerMockRunner.class)
@PrepareForTest({MyServiceImpl.class})
@PowerMockIgnore( { "com.sun.org.apache.xerces.*", "org.dom4j.*", "org.xml.sax.*" })
public class MyTest extends InMemoryLookupTests{
private MyServiceImpl service;
...
@Before
public void setup() throws Exception {
MockitoAnnotations.initMocks(this);
service = new MyServiceImpl();
....
}
.....
}
\------------------------------------
Could someone please share some light?
Thanks in advance!
Regards,
Alex.
_Original issue: http://code.google.com/p/powermock/issues/detail?id=270_ | priority | parsing com sun org apache xerces internal parsers saxparser cannot be cast to org xml sax xmlreader from on july hi all first of all my problem is probably similar to the following issue basically i have a hibernate configuration that gets loaded in my test i m getting the following error java lang classcastexception com sun org apache xerces internal parsers saxparser cannot be cast to org xml sax xmlreader at org xml sax helpers xmlreaderfactory loadclass xmlreaderfactory java at org xml sax helpers xmlreaderfactory createxmlreader xmlreaderfactory java at org io saxhelper createxmlreader saxhelper java at org io saxreader createxmlreader saxreader java at org io saxreader getxmlreader saxreader java at org io saxreader read saxreader java org hibernate hibernateexception could not parse configuration inmemory hibernate cfg xml at org hibernate cfg configuration doconfigure configuration java at org hibernate cfg annotationconfiguration doconfigure annotationconfiguration java at org hibernate cfg annotationconfiguration doconfigure annotationconfiguration java at org hibernate cfg configuration configure configuration java at org hibernate cfg annotationconfiguration configure annotationconfiguration java caused by org documentexception driver class com sun org apache xerces internal parsers saxparser does not implement xmlreader nested exception driver class com sun org apache xerces internal parsers saxparser does not implement xmlreader at org io saxreader read saxreader java at org hibernate cfg configuration doconfigure configuration java more at org hibernate cfg configuration doconfigure configuration java warning caught exception attempting to use sax to load a sax xmlreader at org hibernate cfg annotationconfiguration doconfigure annotationconfiguration java at org hibernate cfg annotationconfiguration doconfigure annotationconfiguration java warning exception was java lang classcastexception com sun org apache xerces internal parsers saxparser cannot be cast to org xml sax xmlreader at org hibernate cfg configuration configure configuration java warning i will print the stack trace then carry on using the default sax parser at org hibernate cfg annotationconfiguration configure annotationconfiguration java i ve tried the recommendation suggested in the issue i mentioned above without any luck runwith powermockrunner class preparefortest myserviceimpl class powermockignore com sun org apache xerces org org xml sax public class mytest extends inmemorylookuptests private myserviceimpl service before public void setup throws exception mockitoannotations initmocks this service new myserviceimpl could someone please share some light thanks in advance regards alex original issue | 1 |
720,104 | 24,779,394,833 | IssuesEvent | 2022-10-24 02:26:42 | mito-ds/monorepo | https://api.github.com/repos/mito-ds/monorepo | closed | No way to sign up for Mito Pro after signed in | type: mitosheet waiting on: mockups effort: 5 priority: medium | Currently, there is no way to sign up for Mito Pro after you finish the signing process, without using the installer. We should make it easy to sign up for Mito Pro.
Probably the easiest solution is a simple modal with an input that allows you to signup for Pro; that's it! | 1.0 | No way to sign up for Mito Pro after signed in - Currently, there is no way to sign up for Mito Pro after you finish the signing process, without using the installer. We should make it easy to sign up for Mito Pro.
Probably the easiest solution is a simple modal with an input that allows you to signup for Pro; that's it! | priority | no way to sign up for mito pro after signed in currently there is no way to sign up for mito pro after you finish the signing process without using the installer we should make it easy to sign up for mito pro probably the easiest solution is a simple modal with an input that allows you to signup for pro that s it | 1 |
331,745 | 10,076,586,839 | IssuesEvent | 2019-07-24 16:35:25 | svof/svof | https://api.github.com/repos/svof/svof | closed | Error during Serverside Priority Sync | bug confirmed in-client medium priority | In order to reliable create this error: Turnoff Serverside curing and force a classchange.
![image](https://user-images.githubusercontent.com/14912622/61603911-85272c00-ac0d-11e9-83a3-71f288485fec.png)
I believe the error is caused when sk.notifypriodiffs() runs into these when comparing differences.
![image](https://user-images.githubusercontent.com/14912622/61603760-d125a100-ac0c-11e9-929a-0d60e3c2c113.png)
| 1.0 | Error during Serverside Priority Sync - In order to reliable create this error: Turnoff Serverside curing and force a classchange.
![image](https://user-images.githubusercontent.com/14912622/61603911-85272c00-ac0d-11e9-83a3-71f288485fec.png)
I believe the error is caused when sk.notifypriodiffs() runs into these when comparing differences.
![image](https://user-images.githubusercontent.com/14912622/61603760-d125a100-ac0c-11e9-929a-0d60e3c2c113.png)
| priority | error during serverside priority sync in order to reliable create this error turnoff serverside curing and force a classchange i believe the error is caused when sk notifypriodiffs runs into these when comparing differences | 1 |
378,850 | 11,209,806,852 | IssuesEvent | 2020-01-06 11:27:36 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Minimap icons are 1 frame older than minimap | Fixed Medium Priority | When you drag quicly the minimap you can see that icons loose their positions in the minimap
![ezgif-2-908ba9238d27](https://user-images.githubusercontent.com/53317567/70945931-f7713080-2034-11ea-9d0f-dee37769bc0d.gif)
| 1.0 | Minimap icons are 1 frame older than minimap - When you drag quicly the minimap you can see that icons loose their positions in the minimap
![ezgif-2-908ba9238d27](https://user-images.githubusercontent.com/53317567/70945931-f7713080-2034-11ea-9d0f-dee37769bc0d.gif)
| priority | minimap icons are frame older than minimap when you drag quicly the minimap you can see that icons loose their positions in the minimap | 1 |
25,454 | 2,683,804,348 | IssuesEvent | 2015-03-28 10:21:14 | ConEmu/old-issues | https://api.github.com/repos/ConEmu/old-issues | closed | popup меню не получает фокус | 2–5 stars bug imported Priority-Medium | _From [andrew.grechkin](https://code.google.com/u/andrew.grechkin/) on June 24, 2009 06:07:54_
OS version: XP SP3
FAR version: Far Manager, version 2.0 (build 1003) x86. Bug description... In one of the ConEmu builds the following started happening:
For me, the Apps key brings up the emenu graphical popup menu.
Previously everything worked as it should - after pressing Apps the menu appeared and
received focus, but now it does not receive focus until you click it with the mouse.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=20_ | 1.0 | popup меню не получает фокус - _From [andrew.grechkin](https://code.google.com/u/andrew.grechkin/) on June 24, 2009 06:07:54_
OS version: XP SP3
FAR version: Far Manager, version 2.0 (build 1003) x86. Bug description... In one of the ConEmu builds the following started happening:
For me, the Apps key brings up the emenu graphical popup menu.
Previously everything worked as it should - after pressing Apps the menu appeared and
received focus, but now it does not receive focus until you click it with the mouse.
_Original issue: http://code.google.com/p/conemu-maximus5/issues/detail?id=20_ | priority | popup меню не получает фокус from on june версия ос xp версия far far manager version build описание бага в одном из билдов conemu стало вот что у меня по клавише apps вызывается emenu графическое popup меню раньше все работало как надо после нажатия apps появлялось меню и получало фокус теперь же фокус оно не получает пока не тыкнешь его мышой original issue | 1 |
708,147 | 24,331,869,921 | IssuesEvent | 2022-09-30 20:13:57 | UMKC-MCOM/framework | https://api.github.com/repos/UMKC-MCOM/framework | closed | Landing Page: Photo with Buttons | Priority: Medium | @Working within the updated [XD design document](https://xd.adobe.com/view/41efea3b-2d2d-488d-8381-36dfbfc795c1-14bc/), create the Photo with Buttons (allowing for background + text color variations). | 1.0 | Landing Page: Photo with Buttons - @Working within the updated [XD design document](https://xd.adobe.com/view/41efea3b-2d2d-488d-8381-36dfbfc795c1-14bc/), create the Photo with Buttons (allowing for background + text color variations). | priority | landing page photo with buttons working within the updated create the photo with buttons allowing for background text color variations | 1 |
516,647 | 14,985,632,466 | IssuesEvent | 2021-01-28 20:12:25 | emulsify-ds/emulsify-drupal | https://api.github.com/repos/emulsify-ds/emulsify-drupal | closed | WSOD caused by pager.twig in multilingual views | Priority: Medium Status: Review Needed Type: Bug released | emulsify version: 8.x-1.0-beta4
node version: v11.15.0
yarn version: 1.22.4
**What you did:**
In a view, switched to full pager on a multilingual website
**What happened:**
WSOD, with the following message in log:
**Problem description:**
InvalidArgumentException: $string ("") must be a string. in Drupal\Core\StringTranslation\TranslatableMarkup->__construct() (line 132 of ../web/core/lib/Drupal/Core/StringTranslation/TranslatableMarkup.php).
**Suggested solution:**
For my use, I will compare or replace the Emulsify 'pager.twig' file with the Drupal Stable one (which works). If I find more significant information, I will add it below.
┆Issue is synchronized with this [Jira Story](https://fourkitchens.atlassian.net/browse/GS-192) by [Unito](https://www.unito.io/learn-more)
| 1.0 | WSOD caused by pager.twig in multilingual views - emulsify version: 8.x-1.0-beta4
node version: v11.15.0
yarn version: 1.22.4
**What you did:**
In a view, switched to full pager on a multilingual website
**What happened:**
WSOD, with the following message in log:
**Problem description:**
InvalidArgumentException: $string ("") must be a string. in Drupal\Core\StringTranslation\TranslatableMarkup->__construct() (line 132 of ../web/core/lib/Drupal/Core/StringTranslation/TranslatableMarkup.php).
**Suggested solution:**
For my use, I will compare or replace the Emulsify 'pager.twig' file with the Drupal Stable one (which works). If I find more significant information, I will add it below.
┆Issue is synchronized with this [Jira Story](https://fourkitchens.atlassian.net/browse/GS-192) by [Unito](https://www.unito.io/learn-more)
| priority | wsod caused by pager twig in multilingual views emulsify version x node version yarn version what you did in a view switched to full pager on a multilingual website what happened wsod with the following message in log problem description invalidargumentexception string must be a string in drupal core stringtranslation translatablemarkup construct line of web core lib drupal core stringtranslation translatablemarkup php suggested solution for my use i will compare or replace emulsify pager twig file with drupal stable one which works if finding more signifiant information i will add it below ┆issue is synchronized with this by | 1 |
302,994 | 9,301,244,049 | IssuesEvent | 2019-03-23 20:15:21 | robotframework/SeleniumLibrary | https://api.github.com/repos/robotframework/SeleniumLibrary | closed | Fix Press Keys acceptance test to work with Mac | priority: medium task | In Mac, `CTRL` does not function in same way as in Windows or Linux. In mac `COMMAND` key should be used when testing Press Keys keyword. | 1.0 | Fix Press Keys acceptance test to work with Mac - In Mac, `CTRL` does not function in same way as in Windows or Linux. In mac `COMMAND` key should be used when testing Press Keys keyword. | priority | fix press keys acceptance test to work with mac in mac ctrl does not function in same way as in windows or linux in mac command key should be used when testing press keys keyword | 1 |
792,540 | 27,964,664,371 | IssuesEvent | 2023-03-24 18:22:51 | AY2223S2-CS2113-T14-3/tp | https://api.github.com/repos/AY2223S2-CS2113-T14-3/tp | closed | As a user I can view the date of my expenditures in a user-friendly format | type.Enhancement priority.Medium | So that I can have a better idea of my expenses over the time period of my education | 1.0 | As a user I can view the date of my expenditures in a user-friendly format - So that I can have a better idea of my expenses over the time period of my education | priority | as a user i can view the date of my expenditures in a user friendly format so that i can have a better idea of my expenses over the time period of my education | 1 |
404,894 | 11,864,170,562 | IssuesEvent | 2020-03-25 21:07:48 | teamforus/me | https://api.github.com/repos/teamforus/me | closed | Me-ios: Debug functionality | Difficulty: Medium Priority: Could have Scope: Medium | ## Main asssignee: @
Task:
- [ ] add option to set any IP address
## Context/goal:
add ways to debug easily like:
storing multiple identities access tokens
changing base URL; adding custom local one
![Screenshot_20191216_165646_io forus test_meapp](https://user-images.githubusercontent.com/10818702/70922059-bcb3cc00-2025-11ea-8edf-cb510914175d.jpg)
![Screenshot_20191216_165857_io forus test_meapp](https://user-images.githubusercontent.com/10818702/70922064-bde4f900-2025-11ea-8214-05236e58274c.jpg)
| 1.0 | Me-ios: Debug functionality - ## Main asssignee: @
Task:
- [ ] add option to set any IP address
## Context/goal:
add ways to debug easily like:
storing multiple identities access tokens
changing base URL; adding custom local one
![Screenshot_20191216_165646_io forus test_meapp](https://user-images.githubusercontent.com/10818702/70922059-bcb3cc00-2025-11ea-8edf-cb510914175d.jpg)
![Screenshot_20191216_165857_io forus test_meapp](https://user-images.githubusercontent.com/10818702/70922064-bde4f900-2025-11ea-8214-05236e58274c.jpg)
| priority | me ios debug functionality main asssignee task add option to set any ip adress context goal add ways to debug easily like storing multiple identities access tokens changing base url adding custom local one | 1 |
103,222 | 4,165,518,841 | IssuesEvent | 2016-06-19 15:07:31 | rathena/rathena | https://api.github.com/repos/rathena/rathena | closed | Soul Breaker | bug:skill mode:renewal priority:medium server:map status:confirmed | The magical part misses on ghosts, while it should be non-elemental and thus hit ghost element.
http://irowiki.org/wiki/Soul_Breaker
"Magic portion
- Is Non-elemental. Meaning it is 100% unaffected by property of the target. This means that it will do 100% to Ghost property." | 1.0 | Soul Breaker - The magical part misses on ghosts, while it should be non-elemental and thus hit ghost element.
http://irowiki.org/wiki/Soul_Breaker
"Magic portion
- Is Non-elemental. Meaning it is 100% unaffected by property of the target. This means that it will do 100% to Ghost property." | priority | soul breaker the magical part misses on ghosts while it should be non elemental and thus hit ghost element magic portion is non elemental meaning it is unaffected by property of the target this means that it will do to ghost property | 1 |
799,316 | 28,304,430,870 | IssuesEvent | 2023-04-10 09:32:37 | AY2223S2-CS2103T-W15-2/tp | https://api.github.com/repos/AY2223S2-CS2103T-W15-2/tp | closed | As a familiar user, I can sort orders by date | type.Story priority.Medium user.Orders | ... so that I can keep track of orders due the earliest | 1.0 | As a familiar user, I can sort orders by date - ... so that I can keep track of orders due the earliest | priority | as a familiar user i can sort orders by date so that i can keep track of orders due the earliest | 1 |
525,517 | 15,255,361,445 | IssuesEvent | 2021-02-20 15:48:55 | CookieJarApps/SmartCookieWeb | https://api.github.com/repos/CookieJarApps/SmartCookieWeb | opened | Override website zoom not working with javascript disabled | P2: Medium priority bug | Title is self explaining
This is not a major bug which many people use but
Override zoom in Smartcookieweb is not working with javascript disabled, but in the Bromite browser it is working.
To Reproduce
Steps to reproduce the behavior:
1. Go to Setting
2. Click on General
3. Enable Override Website Zoom block
4. Disable Javascript
5. Go to any website say(www.flipkart.com)
6. Pinch to zoom in or out
Expected behavior
Override zoom should be working regardless of javascript
Screenrecords:
https://user-images.githubusercontent.com/75420364/108601102-f32e5a80-733e-11eb-90aa-12c376f9ad5b.mp4
Working properly in Bromite browser
https://user-images.githubusercontent.com/75420364/108601115-04776700-733f-11eb-87c6-5938bb191f31.mp4
Device Info
- Device: Poco M2 Pro
- OS: Arrow OS Vanilla Android 11
- App version: 12.1.1
Thank you very much | 1.0 | Override website zoom not working with javascript disabled - Title is self explaining
This is not a major bug which many people use but
Override zoom in Smartcookieweb is not working with javascript disabled, but in the Bromite browser it is working.
To Reproduce
Steps to reproduce the behavior:
1. Go to Setting
2. Click on General
3. Enable Override Website Zoom block
4. Disable Javascript
5. Go to any website say(www.flipkart.com)
6. Pinch to zoom in or out
Expected behavior
Override zoom should be working regardless of javascript
Screenrecords:
https://user-images.githubusercontent.com/75420364/108601102-f32e5a80-733e-11eb-90aa-12c376f9ad5b.mp4
Working properly in Bromite browser
https://user-images.githubusercontent.com/75420364/108601115-04776700-733f-11eb-87c6-5938bb191f31.mp4
Device Info
- Device: Poco M2 Pro
- OS: Arrow OS Vanilla Android 11
- App version: 12.1.1
Thank you very much | priority | override website zoom not working with javascript disabled title is self explaining this is not a major bug which many people use but override zoom in smartcookieweb is not working with javascript disabled but in bromite browser it is working to reproduce steps to reproduce the behavior go to setting click on general enable override website zoom block disable javascript go to any website say pinch to zoom in or out expected behavior override zoom should be working regardless of javascript screenrecords working properly in bromite browser device info device poco pro os arrow os vanilla android app version thank you very much | 1 |
471,194 | 13,562,144,726 | IssuesEvent | 2020-09-18 06:16:56 | AY2021S1-CS2103-F09-1/tp | https://api.github.com/repos/AY2021S1-CS2103-F09-1/tp | opened | Update AddressBook: Remove irrelevant tests | priority.Medium | **Wait for #18 and #19 to be completed first**
- Identify relevant and irrelevant tests | 1.0 | Update AddressBook: Remove irrelevant tests - **Wait for #18 and #19 to be completed first**
- Identify relevant and irrelevant tests | priority | update addressbook remove irrelevant tests wait for and to be completed first identify relevant and irrelevant tests | 1 |
423,878 | 12,303,285,843 | IssuesEvent | 2020-05-11 18:25:39 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | samples/subsys/shell/shell_module doesn't work on qemu_x86_64 | area: SMP area: Shell bug priority: medium | **Describe the bug**
When I run this sample under qemu_x86_64 I do not get a shell prompt.
**Expected behavior**
An interactive shell prompt should be available that accepts commands.
**Impact**
Not sure if this is a sample issue or the shell subsystem is broken in some way. Could be an issue with the x86_64 arch, or an SMP problem.
| 1.0 | samples/subsys/shell/shell_module doesn't work on qemu_x86_64 - **Describe the bug**
When I run this sample under qemu_x86_64 I do not get a shell prompt.
**Expected behavior**
An interactive shell prompt should be available that accepts commands.
**Impact**
Not sure if this is a sample issue or the shell subsystem is broken in some way. Could be an issue with the x86_64 arch, or an SMP problem.
| priority | samples subsys shell shell module doesn t work on qemu describe the bug when i run this sample under qemu i do not get a shell prompt expected behavior an interactive shell prompt should be available that accepts commands impact not sure if this is a sample issue or the shell subsystem is broken in some way could be an issue with the arch or an smp problem | 1 |
741,413 | 25,794,182,153 | IssuesEvent | 2022-12-10 11:19:20 | Razvan00Rusu/SOMAS2022-Team6 | https://api.github.com/repos/Razvan00Rusu/SOMAS2022-Team6 | opened | [Leader] - Handle fights | medium priority | Implement the following functions:
- [ ] FightResolution
- [ ] HandleFightProposal
This can probably reuse some logic from #15 | 1.0 | [Leader] - Handle fights - Implement the following functions:
- [ ] FightResolution
- [ ] HandleFightProposal
This can probably reuse some logic from #15 | priority | handle fights implement the following functions fightresolution handlefightproposal this can probably reuse some logic from | 1 |
799,165 | 28,300,842,707 | IssuesEvent | 2023-04-10 05:52:19 | space-wizards/space-station-14 | https://api.github.com/repos/space-wizards/space-station-14 | opened | Failed publishes should yell somewhere in discord. | Priority: 3-Not Required Issue: Feature Request Difficulty: 2-Medium | e.g. if someone forgets an engine version tag the publish will fail and no one will realise until a pr tries to use a new engine version. | 1.0 | Failed publishes should yell somewhere in discord. - e.g. if someone forgets an engine version tag the publish will fail and no one will realise until a pr tries to use a new engine version. | priority | failed publishes should yell somewhere in discord e g if someone forgets an engine version tag the publish will fail and no one will realise until a pr tries to use a new engine version | 1 |
604,317 | 18,681,488,447 | IssuesEvent | 2021-11-01 06:32:45 | nimblehq/nimble-medium-ios | https://api.github.com/repos/nimblehq/nimble-medium-ios | closed | As a user, I can edit my article from the article details screen | type : feature category : ui priority : medium | ## Why
When the users log in to the application successfully, they can edit their own article from the `Article Details` screen.
## Acceptance Criteria
- [ ] On the `Article Details` screen UI layout from #19, add an edit button to the top right corner of the article header section as show in the below design.
- [ ] Reuse the edit article icon from the UI layout task #39.
- [ ] Hide the button by default.
## Resources
- Sample screen UI layout:
<img width="572" alt="Screen Shot 2021-08-19 at 14 03 16 copy" src="https://user-images.githubusercontent.com/70877098/130316973-022dce29-d031-4f93-8e06-3303a128da13.png">
| 1.0 | As a user, I can edit my article from the article details screen - ## Why
When the users log in to the application successfully, they can edit their own article from the `Article Details` screen.
## Acceptance Criteria
- [ ] On the `Article Details` screen UI layout from #19, add an edit button to the top right corner of the article header section as show in the below design.
- [ ] Reuse the edit article icon from the UI layout task #39.
- [ ] Hide the button by default.
## Resources
- Sample screen UI layout:
<img width="572" alt="Screen Shot 2021-08-19 at 14 03 16 copy" src="https://user-images.githubusercontent.com/70877098/130316973-022dce29-d031-4f93-8e06-3303a128da13.png">
| priority | as a user i can edit my article from the article details screen why when the users login the application successfully they can edit their own article from the article details screen acceptance criteria on the article details screen ui layout from add an edit button to the top right corner of the article header section as show in the below design reuse the edit article icon from the ui layout task hide the button by default resources sample screen ui layout img width alt screen shot at copy src | 1 |
220,212 | 7,354,122,376 | IssuesEvent | 2018-03-09 04:51:04 | CS2103JAN2018-W13-B4/main | https://api.github.com/repos/CS2103JAN2018-W13-B4/main | opened | 33. As a user I want to automatically set tasks with no priority level to the lowest level | priority.medium type.story | ... so that I can have priority levels for all tasks even if I had forgotten to set them | 1.0 | 33. As a user I want to automatically set tasks with no priority level to the lowest level - ... so that I can have priority levels for all tasks even if I had forgotten to set them | priority | as a user i want to automatically set tasks with no priority level to the lowest level so that i can have priority levels for all tasks even if i had forgotten to set them | 1 |
194,277 | 6,893,219,131 | IssuesEvent | 2017-11-23 01:56:39 | Marri/glowfic | https://api.github.com/repos/Marri/glowfic | closed | Searching for replies by an icon should be more clearly exposed | 2. high priority 8. medium enhancement good first issue | Currently it's hidden on the "Times Used" row of the icon stats page, and not displayed prominently anywhere; this was originally a hack and should be made into a proper feature somewhere.
An example place it could go: in the sidebar on an icon, which currently has things like "Posts". | 1.0 | Searching for replies by an icon should be more clearly exposed - Currently it's hidden on the "Times Used" row of the icon stats page, and not displayed prominently anywhere; this was originally a hack and should be made into a proper feature somewhere.
An example place it could go: in the sidebar on an icon, which currently has things like "Posts". | priority | searching for replies by an icon should be more clearly exposed currently it s hidden on the times used row of the icon stats page and not displayed prominently anywhere this was originally a hack and should be made into a proper feature somewhere an example place it could go in the sidebar on an icon which currently has things like posts | 1 |
244,832 | 7,880,654,698 | IssuesEvent | 2018-06-26 16:32:21 | aowen87/FOO | https://api.github.com/repos/aowen87/FOO | closed | Remove 'outline only' Mesh plot option | Expected Use: 3 - Occasional Impact: 3 - Medium OS: All Priority: Normal Support Group: Any Target Version: 2.10 feature | This is permanently disabled and has never functioned.
This should be removed from the Atts. | 1.0 | Remove 'outline only' Mesh plot option - This is permanently disabled and has never functioned.
This should be removed from the Atts. | priority | remove outline only mesh plot option this is permanently disabled and has never functioned this should be removed from the atts | 1 |
165,661 | 6,282,987,830 | IssuesEvent | 2017-07-19 01:24:26 | vmware/vic | https://api.github.com/repos/vmware/vic | closed | Add --detect-server-thumbprint to vic-ui | area/ui priority/medium | **User Statement:**
As a UI engineer I would want to improve the VIC UI installer for better user experience. As per #4254, the VIC UI plugin zip files would eventually be served in a web server running on the OVA appliance. This makes it possible to make the UI installer much leaner and more secure in a couple aspects such as not having to SSH into the VC with the root credentials. One change this new approach would demand though is a new flag of the `vic-ui` command that makes it easier to retrieve the SHA-1 thumbprint of the web server serving the VIC UI plugins.
**Details:**
By default the vSphere Client always uses a secure HTTP connection to retrieve the plugin from the server, so we can safely assume that we will always need to provide `--server-thumbprint` with the `vic-ui-$platform` command. The server thumbprint is currently to be fetched manually. By indicating through a flag like `--detect-server-thumbprint` while running `vic-ui-$platform install`, it will talk to the server, prompt the user for confirmation and proceed with the retrieved thumbprint.
**Acceptance Criteria:**
1. `vic-ui-$platform` supports a new flag that does what it's expected to do
2. It should warn the user and prompts for confirmation if they want to accept the thumbprint
| 1.0 | Add --detect-server-thumbprint to vic-ui - **User Statement:**
As a UI engineer I would want to improve the VIC UI installer for better user experience. As per #4254, the VIC UI plugin zip files would eventually be served in a web server running on the OVA appliance. This makes it possible to make the UI installer much leaner and more secure in a couple aspects such as not having to SSH into the VC with the root credentials. One change this new approach would demand though is a new flag of the `vic-ui` command that makes it easier to retrieve the SHA-1 thumbprint of the web server serving the VIC UI plugins.
**Details:**
By default the vSphere Client always uses a secure HTTP connection to retrieve the plugin from the server, so we can safely assume that we will always need to provide `--server-thumbprint` with the `vic-ui-$platform` command. The server thumbprint is currently to be fetched manually. By indicating through a flag like `--detect-server-thumbprint` while running `vic-ui-$platform install`, it will talk to the server, prompt the user for confirmation and proceed with the retrieved thumbprint.
**Acceptance Criteria:**
1. `vic-ui-$platform` supports a new flag that does what it's expected to do
2. It should warn the user and prompts for confirmation if they want to accept the thumbprint
| priority | add detect server thumbprint to vic ui user statement as a ui engineer i would want to improve the vic ui installer for better user experience as per the vic ui plugin zip files would eventually be served in a web server running on the ova appliance this makes it possible to make the ui installer much leaner and more secure in a couple aspects such as not having to ssh into the vc with the root credentials one change this new approach would demand though is a new flag of the vic ui command that makes it easier to retrieve the sha thumbprint of the web server serving the vic ui plugins details by default the vsphere client always uses a secure http connection to retrieve the plugin from the server so we can safely assume that we will always need to provide server thumbprint with the vic ui platform command the server thumbprint is currently to be fetched manually by indicating through a flag like detect server thumbprint while running vic ui platform install it will talk to the server prompt the user for confirmation and proceed with the retrieved thumbprint acceptance criteria vic ui platform supports a new flag that does what it s expected to do it should warn the user and prompts for confirmation if they want to accept the thumbprint | 1 |
144,158 | 5,536,699,413 | IssuesEvent | 2017-03-21 20:17:25 | CCAFS/MARLO | https://api.github.com/repos/CCAFS/MARLO | closed | Deliverables SubTypes MARLO Family agreement | Priority - Medium Specificity-PIM Type -Task | The MARLO group agreed on a list of types and sub-types of deliverables. It seems that another type has been added since: governance, administration and management. For PIM we would like the types and sub-types to stay unchanged from the original agreement. | 1.0 | Deliverables SubTypes MARLO Family agreement - The MARLO group agreed on a list of types and sub-types of deliverables. It seems that another type has been added since: governance, administration and management. For PIM we would like the types and sub-types to stay unchanged from the original agreement. | priority | deliverables subtypes marlo family agreement the marlo group agreed on a list of types and sub types of deliverables it seems that another type has been added since governance administration and management for pim we would like the types and sub types to stay unchanged from the original agreement | 1 |
256,221 | 8,127,040,414 | IssuesEvent | 2018-08-17 06:18:28 | aowen87/BAR | https://api.github.com/repos/aowen87/BAR | closed | Move vtk src from 3rd party to a higher level vendor branches loc in svn repo | Expected Use: 3 - Occasional Feature Impact: 3 - Medium Priority: Normal |
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 188
Status: Resolved
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: Move vtk src from 3rd party to a higher level vendor branches loc in svn repo
Assigned to: Cyrus Harrison
Category:
Target version: 2.0.2
Author: Cyrus Harrison
Start: 06/29/2010
Due date:
% Done: 0
Estimated time:
Created: 06/29/2010 07:32 pm
Updated: 06/30/2010 01:04 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
Comments:
B/c it contains a few versions of the entire vtk source, moving it to vendor_branches makes it much easier to checkout the 'third_party' directory. Resolved on the trunk with checkins r11799 & r11800.
| 1.0 | Move vtk src from 3rd party to a higher level vendor branches loc in svn repo -
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. As such, not all
information was able to be captured in the transition. Below is
a complete record of the original redmine ticket.
Ticket number: 188
Status: Resolved
Project: VisIt
Tracker: Feature
Priority: Normal
Subject: Move vtk src from 3rd party to a higher level vendor branches loc in svn repo
Assigned to: Cyrus Harrison
Category:
Target version: 2.0.2
Author: Cyrus Harrison
Start: 06/29/2010
Due date:
% Done: 0
Estimated time:
Created: 06/29/2010 07:32 pm
Updated: 06/30/2010 01:04 pm
Likelihood:
Severity:
Found in version:
Impact: 3 - Medium
Expected Use: 3 - Occasional
OS: All
Support Group: Any
Description:
Comments:
B/c it contains a few versions of the entire vtk source, moving it to vendor_branches makes it much easier to checkout the 'third_party' directory. Resolved on the trunk with checkins r11799 & r11800.
| priority | move vtk src from party to a higher level vendor branches loc in svn repo redmine migration this ticket was migrated from redmine as such not all information was able to be captured in the transition below is a complete record of the original redmine ticket ticket number status resolved project visit tracker feature priority normal subject move vtk src from party to a higher level vendor branches loc in svn repo assigned to cyrus harrison category target version author cyrus harrison start due date done estimated time created pm updated pm likelihood severity found in version impact medium expected use occasional os all support group any description comments b c it contains a few versions of the entire vtk source moving it to vendor branches was makes it much easier to checkout the third party directory resolved on the trunk with checkins | 1 |
61,941 | 3,163,498,728 | IssuesEvent | 2015-09-20 10:22:25 | MinetestForFun/server-minetestforfun-creative | https://api.github.com/repos/MinetestForFun/server-minetestforfun-creative | closed | Tweaked the [bonemeal] mod | Modding Priority: Medium | It doesn't wokr for all trees, need to tweak it for every plants/trees compatibility "instant growth" feature | 1.0 | Tweaked the [bonemeal] mod - It doesn't wokr for all trees, need to tweak it for every plants/trees compatibility "instant growth" feature | priority | tweaked the mod it doesn t wokr for all trees need to tweak it for every plants trees compatibility instant growth feature | 1 |
481,673 | 13,889,887,767 | IssuesEvent | 2020-10-19 08:31:58 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | [0.9.1.0 beta staging-1806]Civics: Mint currency trigger review | Category: Laws Priority: Medium | Mint currency sub-trigger "Currency Amount" only count amount of instances you craft a currency, not total value of the craft which makes laws for currency amounts unnecessarily complex and can easily be circumvented if not done correctly.
Change currency amount into the total value of the currency it's pointed towards.
![Screenshot_47.jpg](https://images.zenhubusercontent.com/5f5a7fa59878ed957a861886/a96c2715-d921-4814-90cd-a3cf3b3b0f93) | 1.0 | [0.9.1.0 beta staging-1806]Civics: Mint currency trigger review - Mint currency sub-trigger "Currency Amount" only count amount of instances you craft a currency, not total value of the craft which makes laws for currency amounts unnecessarily complex and can easily be circumvented if not done correctly.
Change currency amount into the total value of the currency it's pointed towards.
![Screenshot_47.jpg](https://images.zenhubusercontent.com/5f5a7fa59878ed957a861886/a96c2715-d921-4814-90cd-a3cf3b3b0f93) | priority | civics mint currency trigger review mint currency sub trigger currency amount only count amount of instances you craft a currency not total value of the craft which makes laws for currency amounts unnecessarily complex and can easily be circumvented if not done correctly change currency amount into the total value of the currency its pointed towards | 1 |
121,525 | 4,817,690,739 | IssuesEvent | 2016-11-04 14:26:43 | Cadasta/cadasta-platform | https://api.github.com/repos/Cadasta/cadasta-platform | closed | Inlinemanual help panel isn't working correctly | bug medium priority ui/ux | ### Steps to reproduce the error
Select help link in header
### Actual behavior
Nothing happens
### Expected behavior
Should open help panel
| 1.0 | Inlinemanual help panel isn't working correctly - ### Steps to reproduce the error
Select help link in header
### Actual behavior
Nothing happens
### Expected behavior
Should open help panel
| priority | inlinemanual help panel isn t working correctly steps to reproduce the error select help link in header actual behavior nothing happens expected behavior should open help panel | 1 |
134,488 | 5,227,285,945 | IssuesEvent | 2017-01-28 00:52:19 | magnolo/newhere | https://api.github.com/repos/magnolo/newhere | closed | New User with existing email | front end medium priority NGO | If I wanna create a new user, but enter an already existing email, I get no feedback (error message).
| 1.0 | New User with existing email - If I wanna create a new user, but enter an already existing email, I get no feedback (error message).
| priority | new user with existing email if i wanna create a new user but enter an already existing email i get no feedback error message | 1 |
728,418 | 25,077,887,953 | IssuesEvent | 2022-11-07 16:48:34 | OpenLiberty/liberty-tools-vscode | https://api.github.com/repos/OpenLiberty/liberty-tools-vscode | closed | LS integration: Narrow activation scope of LS for Liberty server configuration | language client medium priority | Currently the liberty-langserver activates on any files named bootstrap.properties or server.env. We should narrow this scope to only activate on these files in certain directory structures:
bootstrap.properties - File named "bootstrap.properties" and exists in either a "src/main/liberty/config" or "usr/servers" directory
server.env - File named "server.env" and exists in either a "src/main/liberty/config" or "usr/servers" directory
See https://github.com/OpenLiberty/liberty-tools-intellij/issues/92
Related to #63 | 1.0 | LS integration: Narrow activation scope of LS for Liberty server configuration - Currently the liberty-langserver activates on any files named bootstrap.properties or server.env. We should narrow this scope to only activate on these files in certain directory structures:
bootstrap.properties - File named "bootstrap.properties" and exists in either a "src/main/liberty/config" or "usr/servers" directory
server.env - File named "server.env" and exists in either a "src/main/liberty/config" or "usr/servers" directory
See https://github.com/OpenLiberty/liberty-tools-intellij/issues/92
Related to #63 | priority | ls integration narrow activation scope of ls for liberty server configuration currently the liberty langserver activates on any files named bootstrap properties or server env we should narrow this scope to only activate on these files in certain directory structures bootstrap properties file named bootstrap properties and exists in either a src main liberty config or usr servers directory server env file named server env and exists in either a src main liberty config or usr servers directory see related to | 1 |
2,451 | 2,525,860,276 | IssuesEvent | 2015-01-21 06:53:29 | graybeal/ont | https://api.github.com/repos/graybeal/ont | closed | keep URI link field in ontmd | 1 star bug imported ontmd Priority-Medium voc2rdf wontfix | _From [[email protected]](https://code.google.com/u/113886747689301365533/) on May 05, 2009 17:23:15_
Voc2RDF currently shows a nice URI field (that is automatically updated
according to contents in other parts of the interface) in the upper right
side of the interface. For consistency, keep that piece of information in
the OntMd tool.
I'm using version 1.2.0.beta3 (20090416204347)
_Original issue: http://code.google.com/p/mmisw/issues/detail?id=130_ | 1.0 | keep URI link field in ontmd - _From [[email protected]](https://code.google.com/u/113886747689301365533/) on May 05, 2009 17:23:15_
Voc2RDF currently shows a nice URI field (that is automatically updated
according to contents in other parts of the interface) in the upper right
side of the interface. For consistency, keep that piece of information in
the OntMd tool.
I'm using version 1.2.0.beta3 (20090416204347)
_Original issue: http://code.google.com/p/mmisw/issues/detail?id=130_ | priority | keep uri link field in ontmd from on may currently shows a nice uri field that is automatically updated according to contents in other parts of the interface in the upper right side of the interface for consistency keep that piece of information in the ontmd tool i m using version original issue | 1 |
211,215 | 7,199,287,871 | IssuesEvent | 2018-02-05 15:32:21 | trellis-ldp/trellis | https://api.github.com/repos/trellis-ldp/trellis | opened | Add an OSGi-based deployment option | enhancement osgi priority/medium | It should be possible to deploy the Trellis triplestore-based implementation entirely in an OSGi container. This would involve writing up an OSGi-based wiring (e.g. Blueprint) and including some PaxExam tests. | 1.0 | Add an OSGi-based deployment option - It should be possible to deploy the Trellis triplestore-based implementation entirely in an OSGi container. This would involve writing up an OSGi-based wiring (e.g. Blueprint) and including some PaxExam tests. | priority | add an osgi based deployment option it should be possible to deploy the trellis triplestore based implementation entirely in an osgi container this would involve writing up an osgi based wiring e g blueprint and including some paxexam tests | 1 |
24 | 2,490,258,164 | IssuesEvent | 2015-01-02 11:37:48 | jackjonesfashion/wiz | https://api.github.com/repos/jackjonesfashion/wiz | closed | Activity Flow: Explore possibility of splitting campaigns from graphic briefs | Priority: medium | Separating the briefs would allow Operations to deploy as close as possible to the go-live date.
images belong to a campaign level, not at an activity level. | 1.0 | Activity Flow: Explore possibility of splitting campaigns from graphic briefs - Separating the briefs would allow Operations to deploy as close as possible to the go-live date.
images belong to a campaign level, not at an activity level. | priority | activity flow explore possibility of splitting campaigns from graphic briefs separating the briefs would allow operations to deploy as close as possible to the go live date images belong to a campaign level not at an activity level | 1 |
582,378 | 17,360,296,403 | IssuesEvent | 2021-07-29 19:34:47 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | opened | scene_graph_inspector: Possible to query `CollisionFiltered(FrameId, FrameId)`? | component: geometry proximity priority: medium team: dynamics | Context: For #15026 (\cc @azeey @jennuine)
From Slack: https://drakedevelopers.slack.com/archives/C2WBPQDB7/p1627584416099400
`SceneGraphInspector` has `CollisionFiltered(GeometryId, GeometryId)`, but does not have `CollisionFiltered(FrameId, FrameId)`.
For workflows such as `multibody_plant_subgraph`, it'd be nice to copy over collision filtering with the same semantics from the source to the destination `SceneGraph`.
For example (and paraphrasing Sean's example):
- User adds `frame_1`, with `geometry_a` and `geometry_b`.
- User declares filtering within geometry set containing only `frame_1`.
- User then adds `geometry_c`.
- I, uh, think this implies `geometry_{a,b,c}` all have their collisions filtered?
*Note*: This assumes that collision filtering has different, uh, "transient" properties of filtering w.r.t. frames. If the "transient" semantics are the same, then perhaps we close this issue hehe | 1.0 | scene_graph_inspector: Possible to query `CollisionFiltered(FrameId, FrameId)`? - Context: For #15026 (\cc @azeey @jennuine)
From Slack: https://drakedevelopers.slack.com/archives/C2WBPQDB7/p1627584416099400
`SceneGraphInspector` has `CollisionFiltered(GeometryId, GeometryId)`, but does not have `CollisionFiltered(FrameId, FrameId)`.
For workflows such as `multibody_plant_subgraph`, it'd be nice to copy over collision filtering with the same semantics from the source to the destination `SceneGraph`.
For example (and paraphrasing Sean's example):
- User adds `frame_1`, with `geometry_a` and `geometry_b`.
- User declares filtering within geometry set containing only `frame_1`.
- User then adds `geometry_c`.
- I, uh, think this implies `geometry_{a,b,c}` all have their collisions filtered?
*Note*: This assumes that collision filtering has different, uh, "transient" properties of filtering w.r.t. frames. If the "transient" semantics are the same, then perhaps we close this issue hehe | priority | scene graph inspector possible to query collisionfiltered frameid frameid context for cc azeey jennuine from slack scenegraphinspector has collisionfiltered geometryid geometryid but does not have collisionfiltered frameid frameid for workflows such as multibody plant subgraph it d be nice to copy over collision filtering with the same semantics from the source to the destination scenegraph for example and paraphrasing sean s example user adds frame with geometry a and geometry b user declares filtering within geometry set containing only frame user then adds geometry c i uh think this implies geometry a b c all have their collisions filtered note this assumes that collision filtering has different uh transient properties of filtering w r t frames if the transient semantics are the same then perhaps we close this issue hehe | 1 |
87,451 | 3,754,805,756 | IssuesEvent | 2016-03-12 07:00:05 | gama-platform/gama | https://api.github.com/repos/gama-platform/gama | opened | Ubuntu 14: monitor headers become invisible after the 2nd one | > Bug Affects Usability Concerns Interface OS Linux Priority Medium Version 1.7 beta | See picture (and also #1366 ). Even resizing the view does not make them appear.
<img width="530" alt="capture 2016-03-12 a 07 47 28" src="https://cloud.githubusercontent.com/assets/579256/13721457/5b841324-e828-11e5-972c-f0d97fda1bb0.PNG">
| 1.0 | Ubuntu 14: monitor headers become invisible after the 2nd one - See picture (and also #1366 ). Even resizing the view does not make them appear.
<img width="530" alt="capture 2016-03-12 a 07 47 28" src="https://cloud.githubusercontent.com/assets/579256/13721457/5b841324-e828-11e5-972c-f0d97fda1bb0.PNG">
| priority | ubuntu monitor headers become invisible after the one see picture and also even resizing the view does not make them appear img width alt capture a src | 1 |
691,082 | 23,682,285,249 | IssuesEvent | 2022-08-29 00:05:56 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | 9.0.4 Skid steer able to interact with stockpiles under itself even through 5+blocks | Priority: Medium Status: Fixed Type: Bug Category: Gameplay | I have stone road above stockpiles (touching the top of them) and it seems that the skid steer can interact with them?
It is as if the interact ability is passing through the 1 block of stone road, not sure if this is intentional or not
![image](https://user-images.githubusercontent.com/61889138/95834652-23cd0a80-0d35-11eb-9ea3-8e2c692d8720.png)
| 1.0 | 9.0.4 Skid steer able to interact with stockpiles under itself even through 5+blocks - I have stone road above stockpiles (touching the top of them) and it seems that the skid steer can interact with them?
It is as if the interact ability is passing through the 1 block of stone road, not sure if this is intentional or not
![image](https://user-images.githubusercontent.com/61889138/95834652-23cd0a80-0d35-11eb-9ea3-8e2c692d8720.png)
| priority | skid steer able to interact with stockpiles under itself even through blocks i have stone road above stockpiles touching the top of them and it seems that the skid steer can interact with them it is as if the interact ability is passing through the block of stone road not sure if this is intentional or not | 1 |