Unnamed: 0 (int64, 0 to 832k) | id (float64, 2.49B to 32.1B) | type (stringclasses, 1 value) | created_at (stringlengths, 19 to 19) | repo (stringlengths, 5 to 112) | repo_url (stringlengths, 34 to 141) | action (stringclasses, 3 values) | title (stringlengths, 1 to 957) | labels (stringlengths, 4 to 795) | body (stringlengths, 1 to 259k) | index (stringclasses, 12 values) | text_combine (stringlengths, 96 to 259k) | label (stringclasses, 2 values) | text (stringlengths, 96 to 252k) | binary_label (int64, 0 to 1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
76,852 | 3,497,312,665 | IssuesEvent | 2016-01-06 00:15:27 | IQSS/dataverse | https://api.github.com/repos/IQSS/dataverse | opened | Dataverse: Edit General Info, cannot disable selected blocks by Use metadata fields from parent. | Component: UX & Upgrade Priority: Medium Status: Dev Type: Bug |
Related #1473
If in a sub dv you choose specific metadata blocks, then later decide to use parent blocks again, simply checking use parent and saving doesn't work, though it acts like it does: clears check boxes and saves but revisiting settings shows old values still there.
The way to deselect is to first deselect each selected block, save, then select use parent, save. Not obvious. | 1.0 | Dataverse: Edit General Info, cannot disable selected blocks by Use metadata fields from parent. -
Related #1473
If in a sub dv you choose specific metadata blocks, then later decide to use parent blocks again, simply checking use parent and saving doesn't work, though it acts like it does: clears check boxes and saves but revisiting settings shows old values still there.
The way to deselect is to first deselect each selected block, save, then select use parent, save. Not obvious. | priority | dataverse edit general info cannot disable selected blocks by use metadata fields from parent related if in a sub dv you choose specific metadata blocks then later decide to use parent blocks again simply checking use parent and saving doesn t work though it acts like it does clears check boxes and saves but revisiting settings shows old values still there the way to deselect is to first deselect each selected block save then select use parent save not obvious | 1 |
396,624 | 11,711,678,014 | IssuesEvent | 2020-03-09 06:06:57 | AY1920S2-CS2103T-W12-4/main | https://api.github.com/repos/AY1920S2-CS2103T-W12-4/main | opened | As a user who likes experimenting, I can give me a random recipe that I have added | priority.Medium status.Ongoing type.Story | .. so that I can challenge myself to cook what has been given
Command: `random`
| 1.0 | As a user who likes experimenting, I can give me a random recipe that I have added - .. so that I can challenge myself to cook what has been given
Command: `random`
| priority | as a user who likes experimenting i can give me a random recipe that i have added so that i can challenge myself to cook what has been given command random | 1 |
554,126 | 16,389,597,249 | IssuesEvent | 2021-05-17 14:36:12 | ruuvi/com.ruuvi.station | https://api.github.com/repos/ruuvi/com.ruuvi.station | closed | Starting without wifi access causes crash - 1.5.8 - defect | bug medium priority | If mobile is out of range of wifi or otherwise not connected to wifi app will crash.
May be related to issue #350 | 1.0 | Starting without wifi access causes crash - 1.5.8 - defect - If mobile is out of range of wifi or otherwise not connected to wifi app will crash.
May be related to issue #350 | priority | starting without wifi access causes crash defect if mobile is out of range of wifi or otherwise not connected to wifi app will crash may be related to issue | 1 |
497,476 | 14,371,366,516 | IssuesEvent | 2020-12-01 12:28:14 | replicate/replicate | https://api.github.com/repos/replicate/replicate | closed | Add development support for Linux | priority/medium type/bug | The development environment outlined in `CONTRIBUTING.md` currently does not support Linux systems. Fixing this would enable more developers to contribute to the project. | 1.0 | Add development support for Linux - The development environment outlined in `CONTRIBUTING.md` currently does not support Linux systems. Fixing this would enable more developers to contribute to the project. | priority | add development support for linux the development environment outlined in contributing md currently does not support linux systems fixing this would enable more developers to contribute to the project | 1 |
30,056 | 2,722,147,107 | IssuesEvent | 2015-04-14 00:24:15 | CruxFramework/crux-smart-faces | https://api.github.com/repos/CruxFramework/crux-smart-faces | closed | DialogBox without close button | bug imported Milestone-M14-C4 Module-CruxWidgets Priority-Medium TargetVersion-5.3.0 | _From [[email protected]](https://code.google.com/u/[email protected]/) on March 17, 2015 11:22:45_
DialogBox used in the showcase project does not have close button on the small view type.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=639_ | 1.0 | DialogBox without close button - _From [[email protected]](https://code.google.com/u/[email protected]/) on March 17, 2015 11:22:45_
DialogBox used in the showcase project does not have close button on the small view type.
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=639_ | priority | dialogbox without close button from on march dialogbox used in the showcase project does not have close button on the small view type original issue | 1 |
598,710 | 18,250,675,389 | IssuesEvent | 2021-10-02 06:22:23 | FantasticoFox/VerifyPage | https://api.github.com/repos/FantasticoFox/VerifyPage | closed | Only show numbers of revision of the page file in question instead of backend rev_id's | medium priority feature UX | ![image](https://user-images.githubusercontent.com/45313235/134813671-c05542cd-16f9-4628-83b2-be222e9b2d23.png)
The backed revision ID's are only impotent for debugging purpose (so best is to have a debugging option to enable it). Useful for the user is to understand how many revisions the local file has. | 1.0 | Only show numbers of revision of the page file in question instead of backend rev_id's - ![image](https://user-images.githubusercontent.com/45313235/134813671-c05542cd-16f9-4628-83b2-be222e9b2d23.png)
The backed revision ID's are only impotent for debugging purpose (so best is to have a debugging option to enable it). Useful for the user is to understand how many revisions the local file has. | priority | only show numbers of revision of the page file in question instead of backend rev id s the backed revision id s are only impotent for debugging purpose so best is to have a debugging option to enable it useful for the user is to understand how many revisions the local file has | 1 |
671,022 | 22,738,969,675 | IssuesEvent | 2022-07-07 00:26:50 | PolyhedralDev/TerraOverworldConfig | https://api.github.com/repos/PolyhedralDev/TerraOverworldConfig | opened | Global heightmap refactor | enhancement priority=medium major | Rework all terrain to use a global height map, rather than determining general height via biome distribution. This will make height variation look significantly better as terrain won't need to be interpolated so much. Biome specific detailing can be done by different EQs that utilize the heightmap in different ways.
Here are some examples of an early implementation
![image](https://user-images.githubusercontent.com/73215501/177663884-9a71d7b1-c956-4887-ba8c-50d491ac2386.png)
![image](https://user-images.githubusercontent.com/73215501/177663933-86c67219-5751-4fda-b329-b8fea761e3d9.png)
![image](https://user-images.githubusercontent.com/73215501/177663946-b6f69c58-b74b-4c90-b786-d1a472ce3445.png)
| 1.0 | Global heightmap refactor - Rework all terrain to use a global height map, rather than determining general height via biome distribution. This will make height variation look significantly better as terrain won't need to be interpolated so much. Biome specific detailing can be done by different EQs that utilize the heightmap in different ways.
Here are some examples of an early implementation
![image](https://user-images.githubusercontent.com/73215501/177663884-9a71d7b1-c956-4887-ba8c-50d491ac2386.png)
![image](https://user-images.githubusercontent.com/73215501/177663933-86c67219-5751-4fda-b329-b8fea761e3d9.png)
![image](https://user-images.githubusercontent.com/73215501/177663946-b6f69c58-b74b-4c90-b786-d1a472ce3445.png)
| priority | global heightmap refactor rework all terrain to use a global height map rather than determining general height via biome distribution this will make height variation look significantly better as terrain won t need to be interpolated so much biome specific detailing can be done by different eqs that utilize the heightmap in different ways here are some examples of an early implementation | 1 |
30,370 | 2,723,600,757 | IssuesEvent | 2015-04-14 13:36:54 | CruxFramework/crux-widgets | https://api.github.com/repos/CruxFramework/crux-widgets | closed | ClassPathResolver section in UserManual is out of date | bug imported Milestone-3.0.0 Priority-Medium Wiki | _From [[email protected]](https://code.google.com/u/108972312674998482139/) on May 21, 2010 16:20:51_
What steps will reproduce the problem? 1.Go to Wiki/ UserManual 2.Check instructions for creating a WeblogicClassPathResolver
3.Check method public URL findWebBaseDir()
The document says to override method public URL findWebBaseDir(). However
the class ClassPathResolverImpl doesn't have this method. It has a similar
method:
public URL[] findWebBaseDirs().
Seems like this section of the UserManual is out of date. Could you guys
update it?
Cheers
B
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=115_ | 1.0 | ClassPathResolver section in UserManual is out of date - _From [[email protected]](https://code.google.com/u/108972312674998482139/) on May 21, 2010 16:20:51_
What steps will reproduce the problem? 1.Go to Wiki/ UserManual 2.Check instructions for creating a WeblogicClassPathResolver
3.Check method public URL findWebBaseDir()
The document says to override method public URL findWebBaseDir(). However
the class ClassPathResolverImpl doesn't have this method. It has a similar
method:
public URL[] findWebBaseDirs().
Seems like this section of the UserManual is out of date. Could you guys
update it?
Cheers
B
_Original issue: http://code.google.com/p/crux-framework/issues/detail?id=115_ | priority | classpathresolver section in usermanual is out of date from on may what steps will reproduce the problem go to wiki usermanual check instructions for creating a weblogicclasspathresolver check method public url findwebbasedir the document says to override method public url findwebbasedir however the class classpathresolverimpl doesn t have this method it has a similar method public url findwebbasedirs seems like this section of the usermanual is out of date could you guys update it cheers b original issue | 1 |
524,357 | 15,212,062,272 | IssuesEvent | 2021-02-17 09:55:19 | staxrip/staxrip | https://api.github.com/repos/staxrip/staxrip | closed | NVENC and QSVENS Subtitles file option must be removed | added/fixed/done bug priority medium | **Describe the bug**
The option Other > Subtitle File must be removed from :
- NVEnc : h264, h265
- QSVEnc : h264, h265
because this options allows to **mux (not harcode!)** a subtitle file to the output of NVEnc and QSVEnc. Since in Staxrip the output of the encoder is *.h264 or *.h265, this option makes crash.
| 1.0 | NVENC and QSVENS Subtitles file option must be removed - **Describe the bug**
The option Other > Subtitle File must be removed from :
- NVEnc : h264, h265
- QSVEnc : h264, h265
because this options allows to **mux (not harcode!)** a subtitle file to the output of NVEnc and QSVEnc. Since in Staxrip the output of the encoder is *.h264 or *.h265, this option makes crash.
| priority | nvenc and qsvens subtitles file option must be removed describe the bug the option other subtitle file must be removed from nvenc qsvenc because this options allows to mux not harcode a subtitle file to the output of nvenc and qsvenc since in staxrip the output of the encoder is or this option makes crash | 1 |
57,630 | 3,083,237,308 | IssuesEvent | 2015-08-24 07:30:21 | magro/memcached-session-manager | https://api.github.com/repos/magro/memcached-session-manager | closed | Support context configured with cookies="false" | bug imported Milestone-1.6.5 Priority-Medium | _From [[email protected]](https://code.google.com/u/102339389615967637599/) on April 02, 2013 09:22:14_
<b>What steps will reproduce the problem?</b>
1. tomcat forbid cookies
/conf/context.xml:
<Context cookies="false">
<!-- Default set of monitored resources -->
<WatchedResource>WEB-INF/web.xml</WatchedResource>
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="n1:192.168.1.55:11211,n2:192.168.1.56:11211"
sticky="false"
sessionBackupAsync="false"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.JavaSerializationTranscoderFactory"
/>
</Context>
2.start tomcat server,do Login request,
i put customer info in session,login success
but i do other action,like query customer info with "http://ip:port:/something;jsessionid=46A87DE63835612CAF557AF34013E18D-n2";
fail,note i'm not login.
why memcached do not save Session when tomcat forbid cookies?
3.logs:
2013-4-2 11:56:08 de.javakaffee.web.msm.SessionIdFormat createSessionId
良好: Creating new session id with orig id 'ping' and memcached id 'n1'.
2013-4-2 11:56:08 de.javakaffee.web.msm.NodeAvailabilityCache updateIsNodeAvailable
良好: CacheLoader returned node availability 'true' for node 'n1'.
2013-4-2 11:56:08 de.javakaffee.web.msm.SessionIdFormat createSessionId
良好: Creating new session id with orig id 'E6636323F89B006F4E86DEA12FC02653' and memcached id 'n1'.
2013-4-2 11:56:08 de.javakaffee.web.msm.MemcachedSessionService createSession
良好: Created new session with id E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.MemcachedSessionService backupSession
良好: No session found in session map for E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.LockingStrategy onBackupWithoutLoadedSession
警告: Found no validity info for session id E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve logDebugResponseCookie
良好: Request finished, with Set-Cookie header: JSESSIONID=E6636323F89B006F4E86DEA12FC02653-n1; Path=/; HttpOnly
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve invoke
良好: <<<<<< Request finished: POST /ark/client/customer/login ==================
2013-4-2 11:56:09 de.javakaffee.web.msm.MemcachedSessionService backupSession
良好: No session found in session map for E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.LockingStrategy onBackupWithoutLoadedSession
警告: Found no validity info for session id E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve logDebugResponseCookie
良好: Request finished, with Set-Cookie header: JSESSIONID=E6636323F89B006F4E86DEA12FC02653-n1; Path=/; HttpOnly
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve invoke
良好: <<<<<< Request finished: POST /ark/client/customer/login ==================
2013-4-2 11:56:09 de.javakaffee.web.msm.MemcachedSessionService backupSession
良好: No session found in session map for E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.LockingStrategy onBackupWithoutLoadedSession
警告: Found no validity info for session id E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve logDebugResponseCookie
良好: Request finished, with Set-Cookie header: JSESSIONID=E6636323F89B006F4E86DEA12FC02653-n1; Path=/; HttpOnly
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve invoke
_Original issue: http://code.google.com/p/memcached-session-manager/issues/detail?id=159_ | 1.0 | Support context configured with cookies="false" - _From [[email protected]](https://code.google.com/u/102339389615967637599/) on April 02, 2013 09:22:14_
<b>What steps will reproduce the problem?</b>
1. tomcat forbid cookies
/conf/context.xml:
<Context cookies="false">
<!-- Default set of monitored resources -->
<WatchedResource>WEB-INF/web.xml</WatchedResource>
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
memcachedNodes="n1:192.168.1.55:11211,n2:192.168.1.56:11211"
sticky="false"
sessionBackupAsync="false"
requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
transcoderFactoryClass="de.javakaffee.web.msm.JavaSerializationTranscoderFactory"
/>
</Context>
2.start tomcat server,do Login request,
i put customer info in session,login success
but i do other action,like query customer info with "http://ip:port:/something;jsessionid=46A87DE63835612CAF557AF34013E18D-n2";
fail,note i'm not login.
why memcached do not save Session when tomcat forbid cookies?
3.logs:
2013-4-2 11:56:08 de.javakaffee.web.msm.SessionIdFormat createSessionId
良好: Creating new session id with orig id 'ping' and memcached id 'n1'.
2013-4-2 11:56:08 de.javakaffee.web.msm.NodeAvailabilityCache updateIsNodeAvailable
良好: CacheLoader returned node availability 'true' for node 'n1'.
2013-4-2 11:56:08 de.javakaffee.web.msm.SessionIdFormat createSessionId
良好: Creating new session id with orig id 'E6636323F89B006F4E86DEA12FC02653' and memcached id 'n1'.
2013-4-2 11:56:08 de.javakaffee.web.msm.MemcachedSessionService createSession
良好: Created new session with id E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.MemcachedSessionService backupSession
良好: No session found in session map for E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.LockingStrategy onBackupWithoutLoadedSession
警告: Found no validity info for session id E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve logDebugResponseCookie
良好: Request finished, with Set-Cookie header: JSESSIONID=E6636323F89B006F4E86DEA12FC02653-n1; Path=/; HttpOnly
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve invoke
良好: <<<<<< Request finished: POST /ark/client/customer/login ==================
2013-4-2 11:56:09 de.javakaffee.web.msm.MemcachedSessionService backupSession
良好: No session found in session map for E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.LockingStrategy onBackupWithoutLoadedSession
警告: Found no validity info for session id E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve logDebugResponseCookie
良好: Request finished, with Set-Cookie header: JSESSIONID=E6636323F89B006F4E86DEA12FC02653-n1; Path=/; HttpOnly
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve invoke
良好: <<<<<< Request finished: POST /ark/client/customer/login ==================
2013-4-2 11:56:09 de.javakaffee.web.msm.MemcachedSessionService backupSession
良好: No session found in session map for E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.LockingStrategy onBackupWithoutLoadedSession
警告: Found no validity info for session id E6636323F89B006F4E86DEA12FC02653-n1
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve logDebugResponseCookie
良好: Request finished, with Set-Cookie header: JSESSIONID=E6636323F89B006F4E86DEA12FC02653-n1; Path=/; HttpOnly
2013-4-2 11:56:09 de.javakaffee.web.msm.RequestTrackingHostValve invoke
_Original issue: http://code.google.com/p/memcached-session-manager/issues/detail?id=159_ | priority | support context configured with cookies false from on april what steps will reproduce the problem tomcat forbid cookies conf context xml lt context cookies false gt lt default set of monitored resources gt lt watchedresource gt web inf web xml lt watchedresource gt lt manager classname de javakaffee web msm memcachedbackupsessionmanager memcachednodes sticky false sessionbackupasync false requesturiignorepattern ico png gif jpg css js transcoderfactoryclass de javakaffee web msm javaserializationtranscoderfactory gt lt context gt start tomcat server do login request i put customer info in session login success but i do other action like query customer info with fail note i m not login why memcached do not save session when tomcat forbid cookies logs de javakaffee web msm sessionidformat createsessionid 良好 creating new session id with orig id ping and memcached id de javakaffee web msm nodeavailabilitycache updateisnodeavailable 良好 cacheloader returned node availability true for node de javakaffee web msm sessionidformat createsessionid 良好 creating new session id with orig id and memcached id de javakaffee web msm memcachedsessionservice createsession 良好 created new session with id de javakaffee web msm memcachedsessionservice backupsession 良好 no session found in session map for de javakaffee web msm lockingstrategy onbackupwithoutloadedsession 警告 found no validity info for session id de javakaffee web msm requesttrackinghostvalve logdebugresponsecookie 良好 request finished with set cookie header jsessionid path httponly de javakaffee web msm requesttrackinghostvalve invoke 良好 lt lt lt lt lt lt request finished post ark client customer login de javakaffee web msm memcachedsessionservice backupsession 良好 no session found in session map for de javakaffee web msm lockingstrategy onbackupwithoutloadedsession 警告 found no validity info for session id de javakaffee web msm requesttrackinghostvalve logdebugresponsecookie 良好 request finished with set cookie header jsessionid path httponly de javakaffee web msm requesttrackinghostvalve invoke 良好 lt lt lt lt lt lt request finished post ark client customer login de javakaffee web msm memcachedsessionservice backupsession 良好 no session found in session map for de javakaffee web msm lockingstrategy onbackupwithoutloadedsession 警告 found no validity info for session id de javakaffee web msm requesttrackinghostvalve logdebugresponsecookie 良好 request finished with set cookie header jsessionid path httponly de javakaffee web msm requesttrackinghostvalve invoke original issue | 1 |
532,073 | 15,529,417,577 | IssuesEvent | 2021-03-13 15:07:42 | AY2021S2-CS2103-T14-2/tp | https://api.github.com/repos/AY2021S2-CS2103-T14-2/tp | opened | Improved Search Feature - search by ratings | priority.Medium | As a user, I can search for food diary entries by ratings, so that I can filter out the good places to eat at. | 1.0 | Improved Search Feature - search by ratings - As a user, I can search for food diary entries by ratings, so that I can filter out the good places to eat at. | priority | improved search feature search by ratings as a user i can search for food diary entries by ratings so that i can filter out the good places to eat at | 1 |
607,505 | 18,784,025,124 | IssuesEvent | 2021-11-08 10:11:47 | DiscordDungeons/Bugs | https://api.github.com/repos/DiscordDungeons/Bugs | closed | Reaping Ring Doesn't Work | Bug Bot Priority: Medium | **Describe the bug**
If a ring has the "reaping" attribute it doesn't do what it's supposed to do with boosts.
**To Reproduce**
Steps to reproduce the behavior:
1. Use a ring with reaping and see that you do not get 1.5x the amount in checking plants item - felix's plant ring
**Expected behavior**
For the boost to actually work
**Version**
(all versions since you need to add for all attributes to work xd, the issue happened with mineboost ring, salvaging ring xd)
4.15.4
**Additional context**
Best to actually go through all attributes that the ring can have and pre-add them all to work so any future ring that has a "new" boost that no one has will work | 1.0 | Reaping Ring Doesn't Work - **Describe the bug**
If a ring has the "reaping" attribute it doesn't do what it's supposed to do with boosts.
**To Reproduce**
Steps to reproduce the behavior:
1. Use a ring with reaping and see that you do not get 1.5x the amount in checking plants item - felix's plant ring
**Expected behavior**
For the boost to actually work
**Version**
(all versions since you need to add for all attributes to work xd, the issue happened with mineboost ring, salvaging ring xd)
4.15.4
**Additional context**
Best to actually go through all attributes that the ring can have and pre-add them all to work so any future ring that has a "new" boost that no one has will work | priority | reaping ring doesn t work describe the bug if a ring has the reaping attribute it doesn t do what it s supposed to do with boosts to reproduce steps to reproduce the behavior use a ring with reaping and see that you do not get the amount in checking plants item felix s plant ring expected behavior for the boost to actually work version all versions since you need to add for all attributes to work xd the issue happened with mineboost ring salvaging ring xd additional context best to actually go through all attributes that the ring can have and pre add them all to work so any future ring that has a new boost that no one has will work | 1 |
665,067 | 22,298,284,111 | IssuesEvent | 2022-06-13 05:53:22 | OpenFunction/OpenFunction | https://api.github.com/repos/OpenFunction/OpenFunction | closed | Let functions be triggered by GitHub events | Feature priority/medium | **Proposal**
- Motivation
GitHub is a mainstream code repository, and many developers choose to push their project on GitHub. To meet the needs of the CI\CD of the users, GitHub provides [Webhooks and events](https://docs.github.com/en/developers/webhooks-and-events).
OpenFunction's event framework is dedicated to improving OpenFunction's event responsiveness, so I thought it would be useful to introduce a solution for triggering OpenFunction functions from GitHub events.
- Goals
OpenFunction functions can be triggered by GitHub events.
- Example
In this case, it was necessary to introduce [Argo Events](https://argoproj.github.io/argo-events/), which supports rich event sources, and push Github events to Nats Streaming via Argo Events' [GitHub EventSource](https://argoproj.github.io/argo-events/eventsources/setup/github/).
After that, the OpenFunction Events Trigger, which is subscribed to Nats Streaming, can fetch GitHub events from it and wake up the OpenFunction functions to handle GitHub events.
![Schematic diagram](https://user-images.githubusercontent.com/2360535/159641207-4c94b90b-49c4-4567-ba2f-428b7962f019.png)
- Action Items
- Setup Argo Events
- Create a GitHub EventSource in Argo Events
- Create an Ingress to expose the EventSource service
- Configure the GitHub webhook to use the Ingress address as the callback address
- Docking OpenFunction Events EventBus with Nats Streaming enables OpenFunction Events Trigger to read events from Nats Streaming | 1.0 | Let functions be triggered by GitHub events - **Proposal**
- Motivation
GitHub is a mainstream code repository, and many developers choose to push their project on GitHub. To meet the needs of the CI\CD of the users, GitHub provides [Webhooks and events](https://docs.github.com/en/developers/webhooks-and-events).
OpenFunction's event framework is dedicated to improving OpenFunction's event responsiveness, so I thought it would be useful to introduce a solution for triggering OpenFunction functions from GitHub events.
- Goals
OpenFunction functions can be triggered by GitHub events.
- Example
In this case, it was necessary to introduce [Argo Events](https://argoproj.github.io/argo-events/), which supports rich event sources, and push Github events to Nats Streaming via Argo Events' [GitHub EventSource](https://argoproj.github.io/argo-events/eventsources/setup/github/).
After that, the OpenFunction Events Trigger, which is subscribed to Nats Streaming, can fetch GitHub events from it and wake up the OpenFunction functions to handle GitHub events.
![Schematic diagram](https://user-images.githubusercontent.com/2360535/159641207-4c94b90b-49c4-4567-ba2f-428b7962f019.png)
- Action Items
- Setup Argo Events
- Create a GitHub EventSource in Argo Events
- Create an Ingress to expose the EventSource service
- Configure the GitHub webhook to use the Ingress address as the callback address
- Docking OpenFunction Events EventBus with Nats Streaming enables OpenFunction Events Trigger to read events from Nats Streaming | priority | let functions be triggered by github events proposal motivation github is a mainstream code repository and many developers choose to push their project on github to meet the needs of the ci cd of the users github provides openfunction s event framework is dedicated to improving openfunction s event responsiveness so i thought it would be useful to introduce a solution for triggering openfunction functions from github events goals openfunction functions can be triggered by github events example in this case it was necessary to introduce which supports rich event sources and push github events to nats streaming via argo events after that the openfunction events trigger which is subscribed to nats streaming can fetch github events from it and wake up the openfunction functions to handle github events action items setup argo events create a github eventsource in argo events create an ingress to expose the eventsource service configure the github webhook to use the ingress address as the callback address docking openfunction events eventbus with nats streaming enables openfunction events trigger to read events from nats streaming | 1 |
722,884 | 24,877,365,720 | IssuesEvent | 2022-10-27 20:24:25 | AY2223S1-CS2113-W12-1/tp | https://api.github.com/repos/AY2223S1-CS2113-W12-1/tp | closed | Finish User Guide (before 28th Oct) | type.Task priority.Medium | We need UG before Friday 28th for PED
Format is similar to UG ip (command, description, note, Example, Expected Outcome) | 1.0 | Finish User Guide (before 28th Oct) - We need UG before Friday 28th for PED
Format is similar to UG ip (command, description, note, Example, Expected Outcome) | priority | finish user guide before oct we need ug before friday for ped format is similar to ug ip command description note example expected outcome | 1 |
71,388 | 3,356,379,343 | IssuesEvent | 2015-11-18 20:14:48 | TechReborn/TechReborn | https://api.github.com/repos/TechReborn/TechReborn | closed | Missing texture with standard machine casing. | bug Medium priority | Techreborn: 0.5.6.1004
reborncore:1.0.0.9
forge:10.13.4.1558
ic2/3:2.2.2.791
# Enable Connected textures
B:"Enable Connected textures"=false
![2015-11-12_22 39 56](https://cloud.githubusercontent.com/assets/8199121/11138944/ac85ef34-898e-11e5-8830-dce26ac8b0ae.png)
| 1.0 | Missing texture with standard machine casing. - Techreborn: 0.5.6.1004
reborncore:1.0.0.9
forge:10.13.4.1558
ic2/3:2.2.2.791
# Enable Connected textures
B:"Enable Connected textures"=false
![2015-11-12_22 39 56](https://cloud.githubusercontent.com/assets/8199121/11138944/ac85ef34-898e-11e5-8830-dce26ac8b0ae.png)
| priority | missing texture with standard machine casing techreborn reborncore forge enable connected textures b enable connected textures false | 1 |
481,017 | 13,879,487,940 | IssuesEvent | 2020-10-17 14:40:29 | Unibo-PPS-1920/pps-19-motoScala | https://api.github.com/repos/Unibo-PPS-1920/pps-19-motoScala | closed | As a client I want to start a game with enemies that move with advanced ai based on the difficulty i select | backlog item priority:medium | - [x] #110
- [x] different ai behaviours
| 1.0 | As a client I want to start a game with enemies that move with advanced ai based on the difficulty i select - - [x] #110
- [x] different ai behaviours
| priority | as a client i want to start a game with enemies that move with advanced ai based on the difficulty i select different ai behaviours | 1 |
610,858 | 18,926,776,525 | IssuesEvent | 2021-11-17 10:22:01 | davidvavra/duna-valka-assassinu | https://api.github.com/repos/davidvavra/duna-valka-assassinu | opened | Rozsireni modulu jednotky - armady | priority-medium | Pokud bychom rozlisovali typ jednotky podrobneji, umozni nam to vylepsit infrastrukturu v nekolika ohledech.
Lepsi prezentace parametru armad
Armady maji nasledujici parametry viditelne pro hrace:
- jmeno,
- sila,
- velikost,
- moralka,
- general,
- specialni schopnost.
Vsechny tyto hodnoty muzeme uchovavat v jednom poli v ramci komentare pro hrace. Z hlediska citelnosti pro hrace a jednoduchosti uprav pro nas by bylo fajn mit novy typ, "aktivni/armada", ktery by navic mel parametry vyse (s vyjimkou jmena, ktere nepotrebujeme duplikovat). | 1.0 | Rozsireni modulu jednotky - armady - Pokud bychom rozlisovali typ jednotky podrobneji, umozni nam to vylepsit infrastrukturu v nekolika ohledech.
Lepsi prezentace parametru armad
Armady maji nasledujici parametry viditelne pro hrace:
- jmeno,
- sila,
- velikost,
- moralka,
- general,
- specialni schopnost.
Vsechny tyto hodnoty muzeme uchovavat v jednom poli v ramci komentare pro hrace. Z hlediska citelnosti pro hrace a jednoduchosti uprav pro nas by bylo fajn mit novy typ, "aktivni/armada", ktery by navic mel parametry vyse (s vyjimkou jmena, ktere nepotrebujeme duplikovat). | priority | rozsireni modulu jednotky armady pokud bychom rozlisovali typ jednotky podrobneji umozni nam to vylepsit infrastrukturu v nekolika ohledech lepsi prezentace parametru armad armady maji nasledujici parametry viditelne pro hrace jmeno sila velikost moralka general specialni schopnost vsechny tyto hodnoty muzeme uchovavat v jednom poli v ramci komentare pro hrace z hlediska citelnosti pro hrace a jednoduchosti uprav pro nas by bylo fajn mit novy typ aktivni armada ktery by navic mel parametry vyse s vyjimkou jmena ktere nepotrebujeme duplikovat | 1 |
345,860 | 10,374,137,445 | IssuesEvent | 2019-09-09 08:57:17 | 52North/sos4R | https://api.github.com/repos/52North/sos4R | closed | Test other xml parsing functions and add configuration parameters | priority medium | Add some unit tests for parsing responses and then try out different available xml parsing functions.
``` r
library("XML")
?xml
?xmlParse
?xmlTreeParse # try out internal nodes!
?xmlParseDoc
```
Also try out the options setting. The vector should be part of the SOS instance.
``` r
mySOS@xmlParseDocOptions
```
For available options see http://web.mit.edu/~r/current/arch/i386_linux26/lib/R/library/XML/html/xmlParseDoc.html and http://xmlsoft.org/xmllint.html
| 1.0 | Test other xml parsing functions and add configuration parameters - Add some unit tests for parsing responses and then try out different available xml parsing functions.
``` r
library("XML")
?xml
?xmlParse
?xmlTreeParse # try out internal nodes!
?xmlParseDoc
```
Also try out the options setting. The vector should be part of the SOS instance.
``` r
mySOS@xmlParseDocOptions
```
For available options see http://web.mit.edu/~r/current/arch/i386_linux26/lib/R/library/XML/html/xmlParseDoc.html and http://xmlsoft.org/xmllint.html
| priority | test other xml parsing functions and add configuration parameters add some unit tests for parsing responses and then try out different available xml parsing functions r library xml xml xmlparse xmltreeparse try out internal nodes xmlparsedoc also try out the options setting the vector should be part of the sos instance r mysos xmlparsedocoptions for available options see and | 1 |
85,254 | 3,688,510,955 | IssuesEvent | 2016-02-25 13:13:26 | leoncastillejos/sonar | https://api.github.com/repos/leoncastillejos/sonar | closed | send email on condition fixed | priority:medium type:enhancement | When an alert event is raised, it should automatically notify when the alert condition is fixed. | 1.0 | send email on condition fixed - When an alert event is raised, it should automatically notify when the alert condition is fixed. | priority | send email on condition fixed when an alert event is raised it should automatically notify when the alert condition is fixed | 1 |
249,131 | 7,953,878,638 | IssuesEvent | 2018-07-12 04:29:12 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Storage UI Stockpile Duplication bug | Medium Priority Respond ASAP | I have 2 stockpiles that are bugged a glass and a brick stockpile which show up as doubles in the storage ui.
when I try to consolidate the items into the stockpiles it duplicates the materials.
I'm currently trying to find a fix to remove the stockpiles that duplicate items any ideas would be appreciated | 1.0 | Storage UI Stockpile Duplication bug - I have 2 stockpiles that are bugged a glass and a brick stockpile which show up as doubles in the storage ui.
when I try to consolidate the items into the stockpiles it duplicates the materials.
I'm currently trying to find a fix to remove the stockpiles that duplicate items any ideas would be appreciated | priority | storage ui stockpile duplication bug i have stockpiles that are bugged a glass and a brick stockpile which show up as doubles in the storage ui when i try to consolidate the items into the stockpiles it duplicates the materials i m currently trying to find a fix to remove the stockpiles that duplicate items any ideas would be appreciated | 1 |
605,733 | 18,739,783,808 | IssuesEvent | 2021-11-04 12:17:33 | ita-social-projects/TeachUA | https://api.github.com/repos/ita-social-projects/TeachUA | opened | [Registration] User can register with invalid password. | bug Frontend Priority: Medium Desktop | **Environment**: Windows 10, Google Chrome Version 92.0.4515.159, (64 бит)
**Reproducible**: Always
**Build found**: Last commit
**Steps to reproduce**
1. Go to https://speak-ukrainian.org.ua/dev/
2. Click on the profile drop-down list
3. Click on the 'Зареєструватися'
4. Enter the correct data in each field (except the 'Password' field)
5. In the 'Password' field enter data: '12345678'
**Actual result**
'Password' with data '12345678' are considered correct.
![q1](https://user-images.githubusercontent.com/89512453/140311627-300cb4fc-1226-4b87-8f8d-4ce7d9186aea.png)
**Expected result**
‘Password’ **must contain** the following LATIN characters: upper/lower-case, numbers and reserved characters (~, `, !, @, #, $, %, ^, &, (, ), _, =, +, {, }, [, ], /, |, :, ;,", <, >, ?) // RUS, UKR letters are not valid. Cannot be shorter than 8 characters and longer than 20 characters
Error message appears: “Пароль не може бути коротшим, ніж 8 та довшим, ніж 20 символів.
Пароль повинен містити великі/маленькі літери латинського алфавіту, цифри та спеціальні символи”
"User story #97
| 1.0 | [Registration] User can register with invalid password. - **Environment**: Windows 10, Google Chrome Version 92.0.4515.159, (64 бит)
**Reproducible**: Always
**Build found**: Last commit
**Steps to reproduce**
1. Go to https://speak-ukrainian.org.ua/dev/
2. Click on the profile drop-down list
3. Click on the 'Зареєструватися'
4. Enter the correct data in each field (except the 'Password' field)
5. In the 'Password' field enter data: '12345678'
**Actual result**
'Password' with data '12345678' are considered correct.
![q1](https://user-images.githubusercontent.com/89512453/140311627-300cb4fc-1226-4b87-8f8d-4ce7d9186aea.png)
**Expected result**
‘Password’ **must contain** the following LATIN characters: upper/lower-case, numbers and reserved characters (~, `, !, @, #, $, %, ^, &, (, ), _, =, +, {, }, [, ], /, |, :, ;,", <, >, ?) // RUS, UKR letters are not valid. Cannot be shorter than 8 characters and longer than 20 characters
Error message appears: “Пароль не може бути коротшим, ніж 8 та довшим, ніж 20 символів.
Пароль повинен містити великі/маленькі літери латинського алфавіту, цифри та спеціальні символи”
"User story #97
| priority | user can register with invalid password environment windows google chrome version бит reproducible always build found last commit steps to reproduce go to click on the profile drop down list click on the зареєструватися enter the correct data in each field except the password field in the password field enter data actual result password with data are considered correct expected result ‘password’ must contain the following latin characters upper lower case numbers and reserved characters rus ukr letters are not valid cannot be shorter than characters and longer than characters error message appears “пароль не може бути коротшим ніж та довшим ніж символів пароль повинен містити великі маленькі літери латинського алфавіту цифри та спеціальні символи” user story | 1 |
141,323 | 5,434,894,673 | IssuesEvent | 2017-03-05 12:10:44 | open-serious/open-serious | https://api.github.com/repos/open-serious/open-serious | closed | Projects no longer compile on Win32 after Linux port | os.win32 priority.medium | Due to certain changes made in the engine/game codebase when porting the code to Linux, the following projects no longer compile on Windows properly:
* `Shaders`
* `MakeFONT`
* `DecodeReport`
* `EngineGUI`
* `Modeler`
* `RCon`
* `SeriousSkaStudio`
* `DedicatedServer`
* `GameGUIMP`
* `WorldEditor`
All compile errors should be fixed. | 1.0 | Projects no longer compile on Win32 after Linux port - Due to certain changes made in the engine/game codebase when porting the code to Linux, the following projects no longer compile on Windows properly:
* `Shaders`
* `MakeFONT`
* `DecodeReport`
* `EngineGUI`
* `Modeler`
* `RCon`
* `SeriousSkaStudio`
* `DedicatedServer`
* `GameGUIMP`
* `WorldEditor`
All compile errors should be fixed. | priority | projects no longer compile on after linux port due to certain changes made in the engine game codebase when porting the code to linux the following projects no longer compile on windows properly shaders makefont decodereport enginegui modeler rcon seriousskastudio dedicatedserver gameguimp worldeditor all compile errors should be fixed | 1 |
462,006 | 13,239,675,538 | IssuesEvent | 2020-08-19 04:11:20 | reizuseharu/Diana | https://api.github.com/repos/reizuseharu/Diana | opened | Add Service | [priority] medium [type] enhancement | ### Description
Add Service class
### Details
Service class needed to separate business logic from client
### Acceptance Criteria
- Service capable of accessing SRC endpoint
### Tasks
- [ ] Add Service class
- [ ] Add Unit Tests | 1.0 | Add Service - ### Description
Add Service class
### Details
Service class needed to separate business logic from client
### Acceptance Criteria
- Service capable of accessing SRC endpoint
### Tasks
- [ ] Add Service class
- [ ] Add Unit Tests | priority | add service description add service class details service class needed to separate business logic from client acceptance criteria service capable of accessing src endpoint tasks add service class add unit tests | 1 |
55,036 | 3,071,846,365 | IssuesEvent | 2015-08-19 14:17:59 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | solo.searchButton(String text, boolean onlyVisible) is not working properly. | bug imported Priority-Medium wontfix | _From [[email protected]](https://code.google.com/u/116639303515044083042/) on April 07, 2011 22:33:16_
What steps will reproduce the problem? 1.Launch the contacts.
2.From menu-> manage contacts-> select an account which has no contacts
3.call the solo.searchButton(String text, boolean onlyVisible) api. What is the expected output? What do you see instead? Actual:
======
Returns true even though the button is not present.
Expected:
========
Should return false when button is not available. What version of the product are you using? On what operating system? Android2.3 and robotium 2.2 Please provide any additional information below.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=99_ | 1.0 | solo.searchButton(String text, boolean onlyVisible) is not working properly. - _From [[email protected]](https://code.google.com/u/116639303515044083042/) on April 07, 2011 22:33:16_
What steps will reproduce the problem? 1.Launch the contacts.
2.From menu-> manage contacts-> select an account which has no contacts
3.call the solo.searchButton(String text, boolean onlyVisible) api. What is the expected output? What do you see instead? Actual:
======
Returns true even though the button is not present.
Expected:
========
Should return false when button is not available. What version of the product are you using? On what operating system? Android2.3 and robotium 2.2 Please provide any additional information below.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=99_ | priority | solo searchbutton string text boolean onlyvisible is not working properly from on april what steps will reproduce the problem launch the contacts from menu manage contacts select an account which has no contacts call the solo searchbutton string text boolean onlyvisible api what is the expected output what do you see instead actual returns true even though the button is not present expected should return false when button is not available what version of the product are you using on what operating system and robotium please provide any additional information below original issue | 1 |
730,158 | 25,162,428,822 | IssuesEvent | 2022-11-10 17:49:47 | CDCgov/prime-reportstream | https://api.github.com/repos/CDCgov/prime-reportstream | closed | HHS Protect Data Issue - Blanks for MSH-3.1 and MSH-3.2 | onboarding-ops support Medium Priority Needs refinement | ## Problem statement
NIH user Krishna, found some data quality issues in HHS Protect from ReportStream Data.
For all test results coming from Intrivo, the MSH-3 segments are being left blank:
MSH-3.1 (Sending system namespace)
MSH-3.2 (Sending system OID)
The MARS specifications requires that these be populated.
## What you need to know
I reviewed 1 Intrivo HL7 file dated 10/6/2022 and it looks like there's data in those fields.
MSH
^~\&
Intrivo^2.16.840.1.113434.6.2.2.1.1.2^ISO
Intrivo^2.16.840.1.113434.6.2.10411^CLIA
CDC PRIME^2.16.840.1.114222.4.1.237821^ISO
CDC PRIME^2.16.840.1.114222.4.1.237821^ISO
20221006165232+0000
ORU^R01^ORU_R01
`b8c4a9ff-53fe-4746-8602-03092cc3dbbf`
P
2.5.1
NE
NE
USA
UNICODE UTF-8
PHLabReport-NoAck^ELR251R1_Rcvr_Prof^2.16.840.1.113883.9.11^ISO
## Acceptance criteria
- Confirm whether the MSH-3.1 and MSH-3.2 fields are blank on ReportStream side.
-Figure out a way to send that over to HHS Protect with no blanks
## To do
- [ ] ...
| 1.0 | HHS Protect Data Issue - Blanks for MSH-3.1 and MSH-3.2 - ## Problem statement
NIH user Krishna, found some data quality issues in HHS Protect from ReportStream Data.
For all test results coming from Intrivo, the MSH-3 segments are being left blank:
MSH-3.1 (Sending system namespace)
MSH-3.2 (Sending system OID)
The MARS specifications requires that these be populated.
## What you need to know
I reviewed 1 Intrivo HL7 file dated 10/6/2022 and it looks like there's data in those fields.
MSH
^~\&
Intrivo^2.16.840.1.113434.6.2.2.1.1.2^ISO
Intrivo^2.16.840.1.113434.6.2.10411^CLIA
CDC PRIME^2.16.840.1.114222.4.1.237821^ISO
CDC PRIME^2.16.840.1.114222.4.1.237821^ISO
20221006165232+0000
ORU^R01^ORU_R01
`b8c4a9ff-53fe-4746-8602-03092cc3dbbf`
P
2.5.1
NE
NE
USA
UNICODE UTF-8
PHLabReport-NoAck^ELR251R1_Rcvr_Prof^2.16.840.1.113883.9.11^ISO
## Acceptance criteria
- Confirm whether the MSH-3.1 and MSH-3.2 fields are blank on ReportStream side.
-Figure out a way to send that over to HHS Protect with no blanks
## To do
- [ ] ...
| priority | hhs protect data issue blanks for msh and msh problem statement nih user krishna found some data quality issues in hhs protect from reportstream data for all test results coming from intrivo the msh segments are being left blank msh sending system namespace msh sending system oid the mars specifications requires that these be populated what you need to know i reviewed intrivo file dated and it looks like there s data in those fields msh intrivo iso intrivo clia cdc prime iso cdc prime iso oru oru p ne ne usa unicode utf phlabreport noack rcvr prof iso acceptance criteria confirm whether the msh and msh fields are blank on reportstream side figure out a way to send that over to hhs protect with no blanks to do | 1 |
220,414 | 7,359,811,979 | IssuesEvent | 2018-03-10 11:40:14 | Cloud-CV/EvalAI | https://api.github.com/repos/Cloud-CV/EvalAI | opened | UI Improvements in Leaderboard Entry | GSOC enhancement frontend medium-difficulty new-feature priority-high | - [ ] Show the meta data fields (**Team Members** and **Method Description**) in each leaderboard entry.
Hint: Please refer the screenshot.
![6](https://user-images.githubusercontent.com/12206047/37241823-a1524bf0-2485-11e8-88a0-1b7c99ccfa9a.jpg)
- [ ] The user should be able to edit the **Method Description** field if the entry is created by the same user.
| 1.0 | UI Improvements in Leaderboard Entry - - [ ] Show the meta data fields (**Team Members** and **Method Description**) in each leaderboard entry.
Hint: Please refer the screenshot.
![6](https://user-images.githubusercontent.com/12206047/37241823-a1524bf0-2485-11e8-88a0-1b7c99ccfa9a.jpg)
- [ ] The user should be able to edit the **Method Description** field if the entry is created by the same user.
| priority | ui improvements in leaderboard entry show the meta data fields team members and method description in each leaderboard entry hint please refer the screenshot the user should be able to edit the method description field if the entry is created by the same user | 1 |
351,643 | 10,521,697,610 | IssuesEvent | 2019-09-30 06:53:07 | AY1920S1-CS2103T-T09-2/main | https://api.github.com/repos/AY1920S1-CS2103T-T09-2/main | opened | As a student who is impatient i want to have simple commands | priority.Medium type.Story | so that i can input new entries quickly | 1.0 | As a student who is impatient i want to have simple commands - so that i can input new entries quickly | priority | as a student who is impatient i want to have simple commands so that i can input new entries quickly | 1 |
543,562 | 15,883,599,534 | IssuesEvent | 2021-04-09 17:37:15 | AY2021S2-CS2103T-W10-2/tp | https://api.github.com/repos/AY2021S2-CS2103T-W10-2/tp | closed | [PE-D] Unsure of what my current "list" filter is | priority.Medium severity.Low type.Chore | By executing this command,
![image.png](https://raw.githubusercontent.com/justgnohUG/ped/main/files/8354a56c-7d24-44b7-82c2-98ccf91fb191.png)
![image.png](https://raw.githubusercontent.com/justgnohUG/ped/main/files/7ee7b2b6-c133-408b-9913-3bf153f99bdb.png)
It is slightly difficult to find out what the current list filter is in place or was executed. Hence if I want to go up a level or recall what other filters I want to apply to list, I would have to do "list" and restart the process.
There was clear documentation of this feature however, it may be a good idea to indicate what filters have been applied for "list".
PS: Perhaps consider moving the note on "list", that other commands like "find" will only apply to that particular "list", into a more prominent section.
e.g:
![image.png](https://raw.githubusercontent.com/justgnohUG/ped/main/files/478aede4-7716-4162-aee5-08b64d90143d.png)
<!--session: 1617429915572-7ea4bbe5-9f37-4e7b-97d5-95926028d84a-->
-------------
Labels: `severity.Medium` `type.FeatureFlaw`
original: justgnohUG/ped#3 | 1.0 | [PE-D] Unsure of what my current "list" filter is - By executing this command,
![image.png](https://raw.githubusercontent.com/justgnohUG/ped/main/files/8354a56c-7d24-44b7-82c2-98ccf91fb191.png)
![image.png](https://raw.githubusercontent.com/justgnohUG/ped/main/files/7ee7b2b6-c133-408b-9913-3bf153f99bdb.png)
It is slightly difficult to find out what the current list filter is in place or was executed. Hence if I want to go up a level or recall what other filters I want to apply to list, I would have to do "list" and restart the process.
There was clear documentation of this feature however, it may be a good idea to indicate what filters have been applied for "list".
PS: Perhaps consider moving the note on "list", that other commands like "find" will only apply to that particular "list", into a more prominent section.
e.g:
![image.png](https://raw.githubusercontent.com/justgnohUG/ped/main/files/478aede4-7716-4162-aee5-08b64d90143d.png)
<!--session: 1617429915572-7ea4bbe5-9f37-4e7b-97d5-95926028d84a-->
-------------
Labels: `severity.Medium` `type.FeatureFlaw`
original: justgnohUG/ped#3 | priority | unsure of what my current list filter is by executing this command it is slightly difficult to find out what the current list filter is in place or was executed hence if i want to go up a level or recall what other filters i want to apply to list i would have to do list and restart the process there was clear documentation of this feature however it may be a good idea to indicate what filters have been applied for list ps perhaps consider moving the note on list that other commands like find will only apply to that particular list into a more prominent section e g labels severity medium type featureflaw original justgnohug ped | 1 |
4,500 | 2,552,678,989 | IssuesEvent | 2015-02-02 18:43:00 | Sistema-Integrado-Gestao-Academica/SiGA | https://api.github.com/repos/Sistema-Integrado-Gestao-Academica/SiGA | opened | Currículo de Curso | enhancement [Medium Priority] | Como **secretária acadêmica** [de determinado Curso] desejo alocar disciplinas (issue #31) no **Currículo** de determinado **Curso de Pós-Graduação** (issue #5 ) para que possa organizar corretamente a lista de oferta do meu curso a partir dos cursos existentes no sistema.
------------------------
| 1.0 | Currículo de Curso - Como **secretária acadêmica** [de determinado Curso] desejo alocar disciplinas (issue #31) no **Currículo** de determinado **Curso de Pós-Graduação** (issue #5 ) para que possa organizar corretamente a lista de oferta do meu curso a partir dos cursos existentes no sistema.
------------------------
| priority | currículo de curso como secretária acadêmica desejo alocar disciplinas issue no currículo de determinado curso de pós graduação issue para que possa organizar corretamente a lista de oferta do meu curso a partir dos cursos existentes no sistema | 1 |
355,050 | 10,576,036,948 | IssuesEvent | 2019-10-07 16:58:11 | compodoc/compodoc | https://api.github.com/repos/compodoc/compodoc | closed | [BUG] Same Description used for all Overload Functions | Priority: Medium Time: ~1 hour Type: Bug wontfix | <!--
> Please follow the issue template below for bug reports and queries.
> For issue, start the label of the title with [BUG]
> For feature requests, start the label of the title with [FEATURE] and explain your use case and ideas clearly below, you can remove sections which are not relevant.
-->
##### **Overview of the issue**
When adding a description to each overload function Compodoc adds the description together for all overloads and uses it for all of the functions.
![image](https://user-images.githubusercontent.com/22506071/37264999-f1c396a0-2586-11e8-86b9-3a8c07eae981.png)
##### **Operating System, Node.js, npm, compodoc version(s)**
Compodoc 1.0.9
##### **Angular configuration, a `package.json` file in the root folder**
##### **Compodoc installed globally or locally ?**
Locally
##### **Motivation for or Use Case**
Each override function should have it's own description.
##### **Reproduce the error**
<!-- an unambiguous set of steps to reproduce the error. -->
##### **Related issues**
##### **Suggest a Fix**
| 1.0 | [BUG] Same Description used for all Overload Functions - <!--
> Please follow the issue template below for bug reports and queries.
> For issue, start the label of the title with [BUG]
> For feature requests, start the label of the title with [FEATURE] and explain your use case and ideas clearly below, you can remove sections which are not relevant.
-->
##### **Overview of the issue**
When adding a description to each overload function Compodoc adds the description together for all overloads and uses it for all of the functions.
![image](https://user-images.githubusercontent.com/22506071/37264999-f1c396a0-2586-11e8-86b9-3a8c07eae981.png)
##### **Operating System, Node.js, npm, compodoc version(s)**
Compodoc 1.0.9
##### **Angular configuration, a `package.json` file in the root folder**
##### **Compodoc installed globally or locally ?**
Locally
##### **Motivation for or Use Case**
Each override function should have it's own description.
##### **Reproduce the error**
<!-- an unambiguous set of steps to reproduce the error. -->
##### **Related issues**
##### **Suggest a Fix**
| priority | same description used for all overload functions please follow the issue template below for bug reports and queries for issue start the label of the title with for feature requests start the label of the title with and explain your use case and ideas clearly below you can remove sections which are not relevant overview of the issue when adding a description to each overload function compodoc adds the description together for all overloads and uses it for all of the functions operating system node js npm compodoc version s compodoc angular configuration a package json file in the root folder compodoc installed globally or locally locally motivation for or use case each override function should have it s own description reproduce the error related issues suggest a fix | 1 |
411,193 | 12,015,528,672 | IssuesEvent | 2020-04-10 14:11:25 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | opened | Bandwith Accounting per hour for device is empty | Priority: Medium Type: Bug | **Describe the bug**
Bandwith Accounting per hour for device is empty
**To Reproduce**
Inline L2 setup, downloaded a 1 GB file; the hourly report is empty:
1. Go to https://mgmt_ip:1443/admin/alt#/reports/standard/chart/nodebandwidth/hour
![image](https://user-images.githubusercontent.com/1553962/78996734-3a3ea300-7b13-11ea-98fc-947c0a9c8113.png)
2. Go to https://mgmt_ip:1443/admin/alt#/reports/standard/chart/nodebandwidth/day
![image](https://user-images.githubusercontent.com/1553962/78996801-5cd0bc00-7b13-11ea-8305-58b57c2a2578.png)
**Expected behavior**
We should be able to see what happened in the last hour.
**Additional context**
```
MariaDB [pf]> select * from bandwidth_accounting_history;
+-----------------+---------------------+------------+-----------+-------------+-------------------+-----------+
| node_id | time_bucket | in_bytes | out_bytes | total_bytes | mac | tenant_id |
+-----------------+---------------------+------------+-----------+-------------+-------------------+-----------+
| 414483715395997 | 2020-04-10 09:00:00 | 1083687638 | 20356317 | 1104043955 | 78:f8:82:9f:11:9d | 1 |
| 414483715395997 | 2020-04-10 10:00:00 | 135 | 5439 | 5574 | 78:f8:82:9f:11:9d | 1 |
+-----------------+---------------------+------------+-----------+-------------+-------------------+-----------+
``` | 1.0 | Bandwith Accounting per hour for device is empty - **Describe the bug**
Bandwith Accounting per hour for device is empty
**To Reproduce**
Inline L2 setup, downloaded a 1 GB file; the hourly report is empty:
1. Go to https://mgmt_ip:1443/admin/alt#/reports/standard/chart/nodebandwidth/hour
![image](https://user-images.githubusercontent.com/1553962/78996734-3a3ea300-7b13-11ea-98fc-947c0a9c8113.png)
2. Go to https://mgmt_ip:1443/admin/alt#/reports/standard/chart/nodebandwidth/day
![image](https://user-images.githubusercontent.com/1553962/78996801-5cd0bc00-7b13-11ea-8305-58b57c2a2578.png)
**Expected behavior**
We should be able to see what happened in the last hour.
**Additional context**
```
MariaDB [pf]> select * from bandwidth_accounting_history;
+-----------------+---------------------+------------+-----------+-------------+-------------------+-----------+
| node_id | time_bucket | in_bytes | out_bytes | total_bytes | mac | tenant_id |
+-----------------+---------------------+------------+-----------+-------------+-------------------+-----------+
| 414483715395997 | 2020-04-10 09:00:00 | 1083687638 | 20356317 | 1104043955 | 78:f8:82:9f:11:9d | 1 |
| 414483715395997 | 2020-04-10 10:00:00 | 135 | 5439 | 5574 | 78:f8:82:9f:11:9d | 1 |
+-----------------+---------------------+------------+-----------+-------------+-------------------+-----------+
``` | priority | bandwith accounting per hour for device is empty describe the bug bandwith accounting per hour for device is empty to reproduce inline setup downloaded file the hour repport is empty go to go to expected behavior we should be able to see what happen in the last hour additional context mariadb select from bandwidth accounting history node id time bucket in bytes out bytes total bytes mac tenant id | 1 |
330,307 | 10,038,306,541 | IssuesEvent | 2019-07-18 14:54:13 | Citykleta/web-app | https://api.github.com/repos/Citykleta/web-app | closed | Route exploration | component:Itinerary enhancement priority:medium | At the moment, a user can quickly find a route going from A to B going through various intermediate points.
It would, however, be important to improve this part of the application.
- [ ] change the style of the displayed route to convey the sense of the route; at the moment it just displays the path but no direction is shown
- [ ] propose alternatives
- [ ] give a way to the user to preview the instructions | 1.0 | Route exploration - At the moment, a user can quickly find a route going from A to B going through various intermediate points.
It would, however, be important to improve this part of the application.
- [ ] change the style of the displayed route to convey the sense of the route; at the moment it just displays the path but no direction is shown
- [ ] propose alternatives
- [ ] give a way to the user to preview the instructions | priority | route exploration at the moment a user can quickly find a route going from a to b going through various intermediate points it would be however important to improve this part of the application change the style of the displayed route to convey the sens of the route as the moment it just displays the path but no direction is shown propose alternatives give a way to the user to preview the instructions | 1 |
152,724 | 5,867,980,866 | IssuesEvent | 2017-05-14 07:54:21 | tootsuite/mastodon | https://api.github.com/repos/tootsuite/mastodon | closed | Unable to add a media (Windows Phone 10) | bug priority - medium ui | Hello,
I searched and I think this issue has not been opened yet.
When I try to add a media to a toot from Edge on Windows Phone 10 (Lumia 550), I get a refresh, but the media is not added and the text is erased. This happens every time I try.
I reproduce with mastodon.gougere.fr and mastodon.xyz.
xakan | 1.0 | Unable to add a media (Windows Phone 10) - Hello,
I searched and I think this issue has not been opened yet.
When I try to add a media to a toot from Edge on Windows Phone 10 (Lumia 550), I get a refresh, but the media is not added and the text is erased. This happens every time I try.
I reproduce with mastodon.gougere.fr and mastodon.xyz.
xakan | priority | unable to add a media windows phone hello i searched and i thinked this issue was not opened yet when i try to add a media to a toot from edge on windows phone lumia i ve got a refresh but media is not added and text is erased this happens everytime i try i reproduce with mastodon gougere fr and mastodon xyz xakan | 1 |
676,015 | 23,113,372,247 | IssuesEvent | 2022-07-27 14:42:26 | status-im/status-desktop | https://api.github.com/repos/status-im/status-desktop | opened | `UserImage` component doesn't take privacy settings into account inside message replies | bug Chat priority 2: medium E:Bugfixes E:Settings | Here's an example message history:
![Screenshot from 2022-07-27 16-34-27](https://user-images.githubusercontent.com/445106/181276025-1c941ee8-fb4e-487f-adf2-ecfcf145971a.png)
There's a reply, notice how the image of the message to be replied is shown while it's not shown in the original profile image.
The reason it's not shown in the original message is because by default, we don't show profile image.
When turning this setting on for "everyone" it looks like this:
![Screenshot from 2022-07-27 16-37-24](https://user-images.githubusercontent.com/445106/181276278-58d7d4a8-c182-4d7b-9e74-08873b043642.png)
This means, inside message replies `UserImage` doesn't seem to honor the settings.
| 1.0 | `UserImage` component doesn't take privacy settings into account inside message replies - Here's an example message history:
![Screenshot from 2022-07-27 16-34-27](https://user-images.githubusercontent.com/445106/181276025-1c941ee8-fb4e-487f-adf2-ecfcf145971a.png)
There's a reply, notice how the image of the message to be replied is shown while it's not shown in the original profile image.
The reason it's not shown in the original message is because by default, we don't show profile image.
When turning this setting on for "everyone" it looks like this:
![Screenshot from 2022-07-27 16-37-24](https://user-images.githubusercontent.com/445106/181276278-58d7d4a8-c182-4d7b-9e74-08873b043642.png)
This means, inside message replies `UserImage` doesn't seem to honor the settings.
| priority | userimage component doesn t take privacy settings into account inside message replies here s a an example message history there s a reply notice how the image of the message to be replied is shown while it s not shown in the original profile image the reason it s not shown in the original message is because by default we don t show profile image when turning this setting on for everyone it looks like this this means inside message replies userimage doesn t seem to honor the settings | 1 |
226,701 | 7,522,370,920 | IssuesEvent | 2018-04-12 20:12:16 | CS2103JAN2018-F12-B4/main | https://api.github.com/repos/CS2103JAN2018-F12-B4/main | closed | Locate a customer directly. | priority.medium type.enhancement | Possible enhancement: be able to locate a customer directly instead of having to pull a pre-defined list first. | 1.0 | Locate a customer directly. - Possible enhancement: be able to locate a customer directly instead of having to pull a pre-defined list first. | priority | locate a customer directly possible enhancement be able to locate a customer directly instead of having to pull a pre defined list first | 1 |
58,127 | 3,087,817,953 | IssuesEvent | 2015-08-25 13:55:19 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | Incorrect display of cumulative received/sent statistics for the /stats command | bug imported Priority-Medium | _From [[email protected]](https://code.google.com/u/101495626515388303633/) on October 28, 2013 00:37:39_
1. Across many beta versions ( r502 ) I have seen zero sent/received statistics:
-=[ Total download: 0 B. Total upload: 0 B ]=-
2. However, if the /r command is issued in chat, the correct statistics are written.
3. After this /r command, the /stats command also starts showing correct data.
**Attachment:** [Fly_r15808_statistics.png](http://code.google.com/p/flylinkdc/issues/detail?id=1363)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1363_ | 1.0 | Incorrect display of cumulative received/sent statistics for the /stats command - _From [[email protected]](https://code.google.com/u/101495626515388303633/) on October 28, 2013 00:37:39_
1. Across many beta versions ( r502 ) I have seen zero sent/received statistics:
-=[ Total download: 0 B. Total upload: 0 B ]=-
2. However, if the /r command is issued in chat, the correct statistics are written.
3. After this /r command, the /stats command also starts showing correct data.
**Attachment:** [Fly_r15808_statistics.png](http://code.google.com/p/flylinkdc/issues/detail?id=1363)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1363_ | priority | incorrect display of cumulative received sent statistics for the stats command from on october across many beta versions i have seen zero sent received statistics however if the r command is issued in chat the correct statistics are written after this r command the stats command also starts showing correct data attachment original issue | 1 |
767,375 | 26,921,456,534 | IssuesEvent | 2023-02-07 10:44:45 | AUBGTheHUB/spa-website-2022 | https://api.github.com/repos/AUBGTheHUB/spa-website-2022 | closed | OnClick function for redirecting to Landing page | frontend medium priority SPA | Create an onClick function in React for redirecting users from 'subpages' (e.g. jobs page, or HackAUBG page) to the Landing page (main page) every time the user clicks the HUB logo/name in the Navbar.
| 1.0 | OnClick function for redirecting to Landing page - Create an onClick function in React for redirecting users from 'subpages' (e.g. jobs page, or HackAUBG page) to the Landing page (main page) every time the user clicks the HUB logo/name in the Navbar.
| priority | onclick function for redirecting to landing page create an onclick function in react for redirecting users from subpages e g jobs page or hackaubg page to the landing page main page every time the user clicks the hub logo name in the navbar | 1 |
795,784 | 28,086,121,953 | IssuesEvent | 2023-03-30 09:53:12 | robotframework/robotframework | https://api.github.com/repos/robotframework/robotframework | opened | Support type aliases in formats `'list[int]'` and `'int | float'` in argument conversion | enhancement priority: medium effort: medium | Our argument conversion is typically based on actual types like `int`, `list[int]` and `int | float`, but we also support type aliases as strings like `'int'` or `'integer'`. The motivation for type aliases is to support types returned, for example, by dynamic libraries wrapping code using other languages. Such libraries can simply return type names as strings instead of mapping them to actual Python types.
There are two limitations with type aliases, though:
- It isn't possible to represent types with nested types like `'list[int]'`. Aliases always map to a single concrete type, not to nested types.
- Unions cannot be represented using "Python syntax" like `'int | float'`. It is possible to use a tuple like `('int', 'float')`, though, so this is mainly an inconvenience.
Implementing this enhancement requires two things:
- Support for parsing strings like `'list[int]'` and `'int | float'`. Results could be newish [TypeInfo](https://github.com/robotframework/robotframework/blob/6e6f3a595d800ff43e792c4a7c582e7bf6abc131/src/robot/running/arguments/argumentspec.py#L183) objects that were added to make Libdoc handle nested types properly (#4538). Probably we could add a new `TypeInfo.from_string` class method.
- Enhance type conversion to work with `TypeInfo`. Currently these objects are only used by Libdoc.
In addition to helping with libraries wrapping non-Python code, this enhancement would allow us to create argument converters based on Libdoc spec files. That would probably be useful for external tools such as editor plugins. | 1.0 | Support type aliases in formats `'list[int]'` and `'int | float'` in argument conversion - Our argument conversion is typically based on actual types like `int`, `list[int]` and `int | float`, but we also support type aliases as strings like `'int'` or `'integer'`. The motivation for type aliases is to support types returned, for example, by dynamic libraries wrapping code using other languages. Such libraries can simply return type names as strings instead of mapping them to actual Python types.
There are two limitations with type aliases, though:
- It isn't possible to represent types with nested types like `'list[int]'`. Aliases always map to a single concrete type, not to nested types.
- Unions cannot be represented using "Python syntax" like `'int | float'`. It is possible to use a tuple like `('int', 'float')`, though, so this is mainly an inconvenience.
Implementing this enhancement requires two things:
- Support for parsing strings like `'list[int]'` and `'int | float'`. Results could be newish [TypeInfo](https://github.com/robotframework/robotframework/blob/6e6f3a595d800ff43e792c4a7c582e7bf6abc131/src/robot/running/arguments/argumentspec.py#L183) objects that were added to make Libdoc handle nested types properly (#4538). Probably we could add a new `TypeInfo.from_string` class method.
- Enhance type conversion to work with `TypeInfo`. Currently these objects are only used by Libdoc.
In addition to helping with libraries wrapping non-Python code, this enhancement would allow us to create argument converters based on Libdoc spec files. That would probably be useful for external tools such as editor plugins. | priority | support type aliases in formats list and int float in argument conversion our argument conversion typically uses based on actual types like int list and int float but we also support type aliases as strings like int or integer the motivation for type aliases is to support types returned for example by dynamic libraries wrapping code using other languages such libraries can simply return type names as strings instead of mapping them to actual python types there are two limitations with type aliases though it isn t possible to represent types with nested types like list aliases always map to a single concrete type not to nested types unions cannot be represented using python syntax like int float it is possible to use a tuple like int float though so this is mainly an inconvenience implementing this enhancement requires two things support for parsing strings like list and int float results could be newish objects that were added to make libdoc handle nested types properly probably we could add a new typeinfo from string class method enhance type conversion to work with typeinfo currently these objects are only used by libdoc in addition to helping with libraries wrapping non python code this enhancement would allow us to create argument converters based on libdoc spec files that would probably be useful for external tools such as editor plugins | 1 |
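The Robot Framework enhancement above is about turning alias strings such as `'list[int]'` and `'int | float'` into nested type information. As a rough, hypothetical sketch of what that parsing involves (the `TypeInfo` dataclass and the helper names below are illustrative stand-ins, not Robot Framework's actual API):

```python
# Hypothetical sketch only, not Robot Framework's implementation:
# turn alias strings such as 'list[int]' or 'int | float' into a nested structure.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TypeInfo:                      # stand-in for the real TypeInfo class
    name: str                        # e.g. 'list', 'int', 'Union'
    nested: List["TypeInfo"] = field(default_factory=list)


def _split_top_level(text: str, sep: str) -> List[str]:
    """Split on sep only outside square brackets, so 'dict[str, int]' stays whole."""
    parts, depth, current = [], 0, ""
    for ch in text:
        if ch == "[":
            depth += 1
        elif ch == "]":
            depth -= 1
        if ch == sep and depth == 0:
            parts.append(current)
            current = ""
        else:
            current += ch
    parts.append(current)
    return [p.strip() for p in parts if p.strip()]


def parse_type_string(text: str) -> TypeInfo:
    union_parts = _split_top_level(text, "|")
    if len(union_parts) > 1:                    # union syntax: 'int | float'
        return TypeInfo("Union", [parse_type_string(p) for p in union_parts])
    text = union_parts[0]
    if "[" in text and text.endswith("]"):      # parameterized: 'list[int]'
        name, _, inner = text.partition("[")
        args = _split_top_level(inner[:-1], ",")
        return TypeInfo(name.strip(), [parse_type_string(a) for a in args])
    return TypeInfo(text)


print(parse_type_string("list[int]"))
print(parse_type_string("int | float"))
print(parse_type_string("dict[str, list[int]]"))
```

Splitting only at bracket depth zero is what keeps nested parameters such as `dict[str, list[int]]` intact while still separating top-level union members.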
488,899 | 14,099,094,136 | IssuesEvent | 2020-11-06 00:29:23 | drashland/website | https://api.github.com/repos/drashland/website | closed | Write required documentation for deno-drash issue #427 (after_resource middleware hook) | Priority: Medium Remark: Deploy To Production Type: Chore | ## Summary
The following issue requires documentation before it can be closed:
https://github.com/drashland/deno-drash/issues/427
The following pull request is associated with the above issue:
https://github.com/drashland/deno-drash/issues/428 | 1.0 | Write required documentation for deno-drash issue #427 (after_resource middleware hook) - ## Summary
The following issue requires documentation before it can be closed:
https://github.com/drashland/deno-drash/issues/427
The following pull request is associated with the above issue:
https://github.com/drashland/deno-drash/issues/428 | priority | write required documentation for deno drash issue after resource middleware hook summary the following issue requires documentation before it can be closed the following pull request is associated with the above issue | 1 |
135,388 | 5,247,424,657 | IssuesEvent | 2017-02-01 12:59:41 | moodlepeers/moodle-mod_groupformation | https://api.github.com/repos/moodlepeers/moodle-mod_groupformation | opened | layout issues with clean theme or beuth03 theme | bug FE (frontend) Priority medium | 1. Inside the Group Formation page, the Move block and Actions in every block will disappear, the icons will not appear appropriately!
2. On the beuth03 Theme (the official theme for Beuth Hochschule) the navigation header will not appear appropriately inside the page of the groupformation, the header will navigate to the right.
3. Width of the input type="text": At the beginning of creating a new group formation, the group formation name text (under General) is very narrow, the letters will not appear in the appropriate size! | 1.0 | layout issues with clean theme or beuth03 theme - 1. Inside the Group Formation page, the Move block and Actions in every block will disappear, the icons will not appear appropriately!
2. On the beuth03 Theme (the official theme for Beuth Hochschule) the navigation header will not appear appropriately inside the page of the groupformation, the header will navigate to the right.
3. Width of the input type="text": At the beginning of creating a new group formation, the group formation name text (under General) is very narrow, the letters will not appear in the appropriate size! | priority | layout issues with clean theme or theme inside the group formation page the move block and actions in every block will disappear the icons will not appear appropriatly on the theme the offical theme for beuth hochschule the navigation header will not appear appropriatly inside the page of the groupformation the header will navigate to the right width of the input type text at the beginning of creating a new group formation the group formation name text under general is very narrow the letters will not appear in the appropriate size | 1 |
785,672 | 27,622,209,292 | IssuesEvent | 2023-03-10 01:45:30 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] yb_enable_expression_pushdown for GIN index scan can yield incorrect results | kind/bug area/ysql priority/medium | Jira Link: [DB-5677](https://yugabyte.atlassian.net/browse/DB-5677)
### Description
yb_enable_expression_pushdown does not seem to work correctly. Refer to the slack thread - https://yugabyte.slack.com/archives/CAR5BCH29/p1677518458483219
The test case to reproduce the error is below:
```
drop table demo;
CREATE TABLE demo(
demo_id varchar(255) not null,
guid varchar(255) not null unique,
status varchar(255),
json_content jsonb not null,
primary key (demo_id)
);
CREATE INDEX ref_idx ON demo USING ybgin (json_content jsonb_path_ops) ;
insert into demo select x::text, x::text, x::text, ('{"externalReferences": [{"val":"'||x||'"}]}')::jsonb
from generate_series (1, 10) x;
-- wrong answer
set yb_enable_expression_pushdown=on;
select * from demo where json_content @> '{"externalReferences": [{"val":"9"}]}' and demo_id <> '9';
demo_id | guid | status | json_content
---------+------+--------+----------------------------------------
9 | 9 | 9 | {"externalReferences": [{"val": "9"}]}
-- correct answer
set yb_enable_expression_pushdown=off;
select * from demo where json_content @> '{"externalReferences": [{"val":"9"}]}' and demo_id <> '9';
yugabyte=# demo_id | guid | status | json_content
---------+------+--------+--------------
(0 rows)
```
[DB-5677]: https://yugabyte.atlassian.net/browse/DB-5677?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [YSQL] yb_enable_expression_pushdown for GIN index scan can yield incorrect results - Jira Link: [DB-5677](https://yugabyte.atlassian.net/browse/DB-5677)
### Description
yb_enable_expression_pushdown does not seem to work correctly. Refer to the slack thread - https://yugabyte.slack.com/archives/CAR5BCH29/p1677518458483219
The test case to reproduce the error is below:
```
drop table demo;
CREATE TABLE demo(
demo_id varchar(255) not null,
guid varchar(255) not null unique,
status varchar(255),
json_content jsonb not null,
primary key (demo_id)
);
CREATE INDEX ref_idx ON demo USING ybgin (json_content jsonb_path_ops) ;
insert into demo select x::text, x::text, x::text, ('{"externalReferences": [{"val":"'||x||'"}]}')::jsonb
from generate_series (1, 10) x;
-- wrong answer
set yb_enable_expression_pushdown=on;
select * from demo where json_content @> '{"externalReferences": [{"val":"9"}]}' and demo_id <> '9';
demo_id | guid | status | json_content
---------+------+--------+----------------------------------------
9 | 9 | 9 | {"externalReferences": [{"val": "9"}]}
-- correct answer
set yb_enable_expression_pushdown=off;
select * from demo where json_content @> '{"externalReferences": [{"val":"9"}]}' and demo_id <> '9';
yugabyte=# demo_id | guid | status | json_content
---------+------+--------+--------------
(0 rows)
```
[DB-5677]: https://yugabyte.atlassian.net/browse/DB-5677?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | yb enable expression pushdown for gin index scan can yield incorrect results jira link description yb enable expression pushdown does not seem to work correctly refer to the slack thread the test case to reproduce the error is below drop table demo create table demo demo id varchar not null guid varchar not null unique status varchar json content jsonb not null primary key demo id create index ref idx on demo using ybgin json content jsonb path ops insert into demo select x text x text x text externalreferences jsonb from generate series x wrong answer set yb enable expression pushdown on select from demo where json content externalreferences and demo id demo id guid status json content externalreferences correct answer set yb enable expression pushdown off select from demo where json content externalreferences and demo id yugabyte demo id guid status json content rows | 1 |
364,304 | 10,761,971,150 | IssuesEvent | 2019-10-31 22:07:49 | JuezUN/INGInious | https://api.github.com/repos/JuezUN/INGInious | closed | Deleting the tasks files is a difficult and tedious work | Feature request Medium Priority | This feature request consists of adding selectors and a general delete button for all the files. | 1.0 | Deleting the tasks files is a difficult and tedious work - This feature request consists of adding selectors and a general delete button for all the files. | priority | deleting the tasks files is a difficult and tedious work this feature request consists of adding selectors and a general delete button for all the files | 1 |
66,583 | 3,256,049,459 | IssuesEvent | 2015-10-20 11:59:03 | remkos/rads | https://api.github.com/repos/remkos/rads | opened | Add ALES retracker output | enhancement Priority-Medium | The ALES retracker output for range, sigma0 and SWH is available at PODAAC.
This will cover a 55-km swath along all global coastlines (5 km inland, 50 km off shore).
Requesting just the retracker output (instead of SGDRs) from NOCS. | 1.0 | Add ALES retracker output - The ALES retracker output for range, sigma0 and SWH is available at PODAAC.
This will cover a 55-km swath along all global coastlines (5 km inland, 50 km off shore).
Requesting just the retracker output (instead of SGDRs) from NOCS. | priority | add ales retracker output the ales retracker output for range and swh is available at podaac this will cover a km swath along all global coastlines km inland km off shore requesting just the retracker output instead of sgdrs from nocs | 1 |
449,971 | 12,977,648,759 | IssuesEvent | 2020-07-21 21:04:00 | Sage-Bionetworks/sageseqr | https://api.github.com/repos/Sage-Bionetworks/sageseqr | closed | Simplify plan | medium priority | `convert_geneids()` is better suited to be called from `get_biomart()` instead of from the plan. | 1.0 | Simplify plan - `convert_geneids()` is better suited to be called from `get_biomart()` instead of from the plan. | priority | simplify plan convert geneids is better suited to be called from get biomart instead of from the plan | 1 |
118,525 | 4,750,296,962 | IssuesEvent | 2016-10-22 08:39:51 | CS2103AUG2016-W11-C2/main | https://api.github.com/repos/CS2103AUG2016-W11-C2/main | closed | Clean up model class | priority.medium type.task | Remove unnecessary methods
Update Logic Manager Test to account for sorting
Remove name class
Improve code quality | 1.0 | Clean up model class - Remove unnecessary methods
Update Logic Manager Test to account for sorting
Remove name class
Improve code quality | priority | clean up model class remove unnecessary methods update logic manager test to account for sorting remove name class improve code quality | 1 |
552,057 | 16,193,897,412 | IssuesEvent | 2021-05-04 12:23:17 | ContinualAI/avalanche | https://api.github.com/repos/ContinualAI/avalanche | opened | Add summary table [Metrics Name, Brief Description] | Evaluation Feature - Medium Priority | Just by looking at the loggings or TB table some metrics may not be self-explicative by their names.
We should probably add a table [Metrics Name, Brief Description] in the api-doc to summarize this without going through the whole docstrings and implementation. | 1.0 | Add summary table [Metrics Name, Brief Description] - Just by looking at the loggings or TB table some metrics may not be self-explicative by their names.
We should probably add a table [Metrics Name, Brief Description] in the api-doc to summarize this without going through the whole docstrings and implementation. | priority | add summary table just by looking at the loggings or tb table some metrics may not be self explicative by their names we should probably add a table in the api doc to summarize this without going through the whole docstrings and implementation | 1 |
455,047 | 13,110,832,209 | IssuesEvent | 2020-08-04 21:29:24 | nco/nco | https://api.github.com/repos/nco/nco | closed | ncks flags `--json` and `--dt_fmt=3` do not work together | bug medium priority | Hello, I am using ncks to extract data, and I am very pleased with how we are able to convert any time dimension type into ISO8601, I think it is wonderful for standardising our time dimensions across multiple models.
But unfortunately there is a bug... If you are reading data as json, and want to change the time data to ISO8601 then the JSON is invalid.
Here are two one-liners that will fetch an example .nc file from unidata.ucar.edu, both examples have the `--json` flag, the difference between them is that one has the `--dt_mt=3` flag, and the other does not.
```ncks -d time,0 -v time --json https://www.unidata.ucar.edu/software/netcdf/examples/tos_O1_2001-2002.nc```
^ Valid JSON in the response.
```ncks -d time,0 -v time --json --dt_fmt=3 https://www.unidata.ucar.edu/software/netcdf/examples/tos_O1_2001-2002.nc```
^ Invalid JSON in the response
I am still new to nco and ncks, please let me know if I am doing something wrong...
Thank you for reading.
| 1.0 | ncks flags `--json` and `--dt_fmt=3` do not work together - Hello, I am using ncks to extract data, and I am very pleased with how we are able to convert any time dimension type into ISO8601, I think it is wonderful for standardising our time dimensions across multiple models.
But unfortunately there is a bug... If you are reading data as json, and want to change the time data to ISO8601 then the JSON is invalid.
Here are two one-liners that will fetch an example .nc file from unidata.ucar.edu, both examples have the `--json` flag, the difference between them is that one has the `--dt_mt=3` flag, and the other does not.
```ncks -d time,0 -v time --json https://www.unidata.ucar.edu/software/netcdf/examples/tos_O1_2001-2002.nc```
^ Valid JSON in the response.
```ncks -d time,0 -v time --json --dt_fmt=3 https://www.unidata.ucar.edu/software/netcdf/examples/tos_O1_2001-2002.nc```
^ Invalid JSON in the response
I am still new to nco and ncks, please let me know if I am doing something wrong...
Thank you for reading.
| priority | ncks flags json and dt fmt do not work together hello i am using ncks to extract data and i am very pleased with how we are able to convert any time dimension type into i think it is wonderful for standardising our time dimensions across multiple models but unfortunately there is a bug if you are reading data as json and want to change the time data to then the json is invalid here are two one liners that will fetch an example nc file from unidata ucar edu both examples have the json flag the difference between them is that one has the dt mt flag and the other does not ncks d time v time json valid json in the response ncks d time v time json dt fmt invalid json in the response i am still new to nco and ncks please let me know if i am doing something wrong thank you for reading | 1 |
639,621 | 20,760,021,525 | IssuesEvent | 2022-03-15 15:24:48 | ooni/probe | https://api.github.com/repos/ooni/probe | closed | cli: experimental should be UnattendedOK | bug good first issue priority/medium ooni/probe-cli | This issue is about the CLI not running the experimental nettests group in unattended mode. We should actually be running this group in unattended mode. We investigated and it seems there's no particular reason why we ended up doing that, except perhaps an (undocumented) excess of caution. We can close this issue once we've marked experimental as UnattendedOK. (This issue has been discussed with @hellais on Slack.)
To this end, we need to head over to https://github.com/ooni/probe-cli/blob/master/cmd/ooniprobe/internal/nettests/groups.go and mark experimental as UnattendedOK: true. While there it would also be nice to mark performance explicitly as UnattendedOK: false. We should also probably document why that's the case (i.e., we don't want to max out the bandwidth with background tests).
As regards testing this functionality, it would probably suffice to run `ooniprobe run unattended` to simulate running in unattended mode and manually verifying that the experimental group runs in this case. | 1.0 | cli: experimental should be UnattendedOK - This issue is about the CLI not running the experimental nettests group in unattended mode. We should actually be running this group in unattended mode. We investigated and it seems there's no particular reason why we ended up doing that, except perhaps an (undocumented) excess of caution. We can close this issue once we've marked experimental as UnattendedOK. (This issue has been discussed with @hellais on Slack.)
To this end, we need to head over to https://github.com/ooni/probe-cli/blob/master/cmd/ooniprobe/internal/nettests/groups.go and mark experimental as UnattendedOK: true. While there it would also be nice to mark performance explicitly as UnattendedOK: false. We should also probably document why that's the case (i.e., we don't want to max out the bandwidth with background tests).
As regards testing this functionality, it would probably suffice to run `ooniprobe run unattended` to simulate running in unattended mode and manually verifying that the experimental group runs in this case. | priority | cli experimental should be unattendedok this issue is about the cli not running the experimental nettests group in unattended mode we should actually be running this group in unattended mode we investigated and it seems there s no particular reason why we ended up doing that except perhaps an undocumented excess of caution we can close this issue once we ve marked experimental as unattendedok this issue has been discussed with hellais on slack to this end we need to head over to and mark experimental as unattendedok true while there it would also be nice to mark performance explicitly as unattendedok false we should also probably document why that s the case i e we don t want to max out the bandwidth with background tests as regards testing this functionality it would probably suffice to run ooniprobe run unattended to simulate running in unattended mode and manually verifying that the experimental group runs in this case | 1 |
136,952 | 5,290,783,890 | IssuesEvent | 2017-02-08 20:47:45 | urbit/urbit | https://api.github.com/repos/urbit/urbit | closed | New FP crashes with SIGABRT on some platforms | bug difficulty medium platform specific priority medium | @ohAitch and @galenwp report:
```
~zod:dojo> (add:rd .~2 .~2)address 0xfffffffffffffffd out of loom!
bail: oops
Abort trap: 6
```
More portability stuff. Looks OSX-specific, so I can't debug it because I have no access to OSX.
| 1.0 | New FP crashes with SIGABRT on some platforms - @ohAitch and @galenwp report:
```
~zod:dojo> (add:rd .~2 .~2)address 0xfffffffffffffffd out of loom!
bail: oops
Abort trap: 6
```
More portability stuff. Looks OSX-specific, so I can't debug it because I have no access to OSX.
| priority | new fp crashes with sigabrt on some platforms ohaitch and galenwp report zod dojo add rd address out of loom bail oops abort trap more portability stuff looks osx specific so i can t debug it because i have no access to osx | 1 |
643,991 | 20,962,200,728 | IssuesEvent | 2022-03-27 23:45:26 | Gilded-Games/The-Aether | https://api.github.com/repos/Gilded-Games/The-Aether | closed | Bug: Attempting to use Aether biomes in a Superflat world defaults to minecraft:plains. | priority/medium status/confirmed status/help-wanted status/pending-review type/crash feat/world-gen version/1.18 | **(No longer relevant)**
> **Steps to reproduce:**
> 1. Start creating a new superflat world
> 2. Edit the preset, change the default preset biome from `minecraft:plains` to `aether:aether_skylands`
> 3. Click "Done"
>
> Log: [debug.log](https://github.com/Gilded-Games/The-Aether/files/7198332/debug.log)
| 1.0 | Bug: Attempting to use Aether biomes in a Superflat world defaults to minecraft:plains. - **(No longer relevant)**
> **Steps to reproduce:**
> 1. Start creating a new superflat world
> 2. Edit the preset, change the default preset biome from `minecraft:plains` to `aether:aether_skylands`
> 3. Click "Done"
>
> Log: [debug.log](https://github.com/Gilded-Games/The-Aether/files/7198332/debug.log)
| priority | bug attempting to use aether biomes in a superflat world defaults to minecraft plains no longer relevant steps to reproduce start creating a new superflat world edit the preset change the default preset biome from minecraft plains to aether aether skylands click done log | 1 |
771,238 | 27,076,090,146 | IssuesEvent | 2023-02-14 10:40:58 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | First draft of Django backend for YB | kind/bug priority/medium area/ecosystem | Jira Link: [DB-1777](https://yugabyte.atlassian.net/browse/DB-1777)
Create a Django backend that can be used to run Django apps with YB as the database. Publish this backend to python repository so that it can be installed using `pip install ` | 1.0 | First draft of Django backend for YB - Jira Link: [DB-1777](https://yugabyte.atlassian.net/browse/DB-1777)
Create a Django backend that can be used to run Django apps with YB as the database. Publish this backend to python repository so that it can be installed using `pip install ` | priority | first draft of django backend for yb jira link create a django backend that can be used to run django apps with yb as the database publish this backend to python repository so that it can be installed using pip install | 1 |
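For context on what the requested pip-installable backend would look like from the application side, here is a hypothetical sketch of wiring a third-party Django database backend into `settings.py`. The `ENGINE` value `django_yugabytedb` is an assumed name used purely for illustration; only the `DATABASES` structure itself and YSQL's default port 5433 are standard:

```python
# settings.py (hypothetical usage sketch; 'django_yugabytedb' is an assumed
# backend/package name, not a confirmed one)
DATABASES = {
    "default": {
        "ENGINE": "django_yugabytedb",  # backend installed via pip
        "NAME": "yugabyte",
        "HOST": "127.0.0.1",
        "PORT": "5433",                 # YSQL's default port
        "USER": "yugabyte",
        "PASSWORD": "",
    }
}
```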
51,533 | 3,013,020,779 | IssuesEvent | 2015-07-29 05:27:43 | yawlfoundation/yawl | https://api.github.com/repos/yawlfoundation/yawl | closed | Ability to rename roles | auto-migrated Priority-Medium Type-Enhancement | ```
It would be nice to be able to rename a role.
(Motivation: I am making an analysis in an organization and I have a first
draft of the process defined in YAWL. Suddenly, it turns out that there are
two types of assistants and I need to detail my role Assistant to Assistant
for Charges. I have already a number of users with playing that role in the
system.)
```
Original issue reported on code.google.com by `[email protected]` on 25 Mar 2009 at 3:21 | 1.0 | Ability to rename roles - ```
It would be nice to be able to rename a role.
(Motivation: I am making an analysis in an organization and I have a first
draft of the process defined in YAWL. Suddenly, it turns out that there are
two types of assistants and I need to detail my role Assistant to Assistant
for Charges. I have already a number of users with playing that role in the
system.)
```
Original issue reported on code.google.com by `[email protected]` on 25 Mar 2009 at 3:21 | priority | ability to rename roles it would be nice to be able to rename a role motivation i am making an analysis in an organization and i have a first draft of the process defined in yawl suddenly it turns out that there are two types of assistants and i need to detail my role assistant to assistant for charges i have already a number of users with playing that role in the system original issue reported on code google com by petia wo gmail com on mar at | 1 |
788,539 | 27,756,149,331 | IssuesEvent | 2023-03-16 02:46:08 | AY2223S2-CS2113-W15-1/tp | https://api.github.com/repos/AY2223S2-CS2113-W15-1/tp | opened | Allow user to input meal in one cli command | priority.Medium type.Story | As an experienced user, I want to be able to input my meal using just one command | 1.0 | Allow user to input meal in one cli command - As an experienced user, I want to be able to input my meal using just one command | priority | allow user to input meal in one cli command as an experienced user i want to be able to input my meal using just one command | 1 |
808,287 | 30,054,270,954 | IssuesEvent | 2023-06-28 05:02:43 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [DocDB] Parse packed row before other columns | kind/bug area/docdb priority/medium | Jira Link: [DB-6660](https://yugabyte.atlassian.net/browse/DB-6660)
### Description
When packed row is enabled, we can assume that most of the data is stored in packed rows,
and only a relatively small percentage is stored as updates to a packed row.
Currently we copy the encoded packed row to a temporary buffer,
then check whether we have a column update and pick either the column update or the packed column value.
But since most of the data is stored as packed rows,
it is more effective to parse the packed row directly to the output, and then apply updates to it.
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-6660]: https://yugabyte.atlassian.net/browse/DB-6660?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [DocDB] Parse packed row before other columns - Jira Link: [DB-6660](https://yugabyte.atlassian.net/browse/DB-6660)
### Description
When packed row is enabled, we can assume that most of the data is stored in packed rows,
and only a relatively small percentage is stored as updates to a packed row.
Currently we copy the encoded packed row to a temporary buffer,
then check whether we have a column update and pick either the column update or the packed column value.
But since most of the data is stored as packed rows,
it is more effective to parse the packed row directly to the output, and then apply updates to it.
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-6660]: https://yugabyte.atlassian.net/browse/DB-6660?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | parse packed row before other columns jira link description when packed row is enabled we could consider that most of the data is stored in packed rows and relatively small percentage is stored as updates to packed row currently we copy encoded packed row to a temporary buffer then check whether we have column update and pick column update or packed column value but since most of the data is stored as packed row it is more effective to parse packed row directly to output and then apply updates to it warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information | 1 |
763,261 | 26,749,516,202 | IssuesEvent | 2023-01-30 18:24:39 | Alluxio/alluxio | https://api.github.com/repos/Alluxio/alluxio | closed | Fuse remove file too slow | priority-medium area-fuse type-bug stale | Small-size files (100k) generated with 8 threads in ~ 3 min with fuse stressbench takes more than 30min to delete through POSIX API. Write type is `MUST_CACHE` so UFS is not involved. | 1.0 | Fuse remove file too slow - Small-size files (100k) generated with 8 threads in ~ 3 min with fuse stressbench takes more than 30min to delete through POSIX API. Write type is `MUST_CACHE` so UFS is not involved. | priority | fuse remove file too slow small size files generated with threads in min with fuse stressbench takes more than to delete through posix api write type is must cache so ufs is not involved | 1 |
326,702 | 9,959,791,567 | IssuesEvent | 2019-07-06 10:24:28 | AlexOth/swp-ss19 | https://api.github.com/repos/AlexOth/swp-ss19 | closed | Create a new activity catalog | help wanted medium priority question | Where does the list of outline nodes, the structure for the activity catalog, come from? | 1.0 | Create a new activity catalog - Where does the list of outline nodes, the structure for the activity catalog, come from? | priority | create a new activity catalog where does the list of outline nodes the structure for the activity catalog come from | 1 |
3,183 | 2,537,409,842 | IssuesEvent | 2015-01-26 20:23:25 | web2py/web2py | https://api.github.com/repos/web2py/web2py | opened | notnull constraint not set on foreign key | 2–5 stars bug imported Priority-Medium | _From [[email protected]](https://code.google.com/u/[email protected]/) on March 18, 2013 12:06:17_
What steps will reproduce the problem? 1. create two tables, add a field to reference the other table and set notnull=True. e.g.
db.define_table('mytable',
Field('name', 'string'))
db.define_table('mytable2',
Field('name', 'string'),
Field('mytable', db.mytable, notnull=True)) What is the expected output? What do you see instead? the not null constraint is missing in the created db table:
CREATE TABLE mytable2
(
id serial NOT NULL,
"name" character varying(512),
mytable integer,
CONSTRAINT mytable2_pkey PRIMARY KEY (id),
CONSTRAINT mytable2_mytable_fkey FOREIGN KEY (mytable)
REFERENCES mytable (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE CASCADE
)
WITH (
OIDS=FALSE
);
ALTER TABLE mytable2 OWNER TO postgres;
instead it should be
mytable integer NOT NULL,
for the mytable column. What version of the product are you using? On what operating system? web2py 2.4.4, Windows 7, Postgres 9.0
_Original issue: http://code.google.com/p/web2py/issues/detail?id=1395_ | 1.0 | notnull constraint not set on foreign key - _From [[email protected]](https://code.google.com/u/[email protected]/) on March 18, 2013 12:06:17_
What steps will reproduce the problem? 1. create two tables, add a field to reference the other table and set notnull=True. e.g.
db.define_table('mytable',
Field('name', 'string'))
db.define_table('mytable2',
Field('name', 'string'),
Field('mytable', db.mytable, notnull=True)) What is the expected output? What do you see instead? the not null constraint is missing in the created db table:
CREATE TABLE mytable2
(
id serial NOT NULL,
"name" character varying(512),
mytable integer,
CONSTRAINT mytable2_pkey PRIMARY KEY (id),
CONSTRAINT mytable2_mytable_fkey FOREIGN KEY (mytable)
REFERENCES mytable (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE CASCADE
)
WITH (
OIDS=FALSE
);
ALTER TABLE mytable2 OWNER TO postgres;
instead it should be
mytable integer NOT NULL,
for the mytable column. What version of the product are you using? On what operating system? web2py 2.4.4, Windows 7, Postgres 9.0
_Original issue: http://code.google.com/p/web2py/issues/detail?id=1395_ | priority | notnull constraint not set on foreign key from on march what steps will reproduce the problem create two tables add a field to reference the other table and set notnull true e g db define table mytable field name string db define table field name string field mytable db mytable notnull true what is the expected output what do you see instead the not null constraint is missing in the created db table create table id serial not null name character varying mytable integer constraint pkey primary key id constraint mytable fkey foreign key mytable references mytable id match simple on update no action on delete cascade with oids false alter table owner to postgres instead it should be mytable integer not null for the mytable column what version of the product are you using on what operating system windows postgres original issue | 1 |
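Until the DAL itself emits the constraint, one possible workaround is to define the tables as in the report and add the missing `NOT NULL` manually with raw DDL. This is a sketch under assumptions (standalone `pydal` package, PostgreSQL connection string with placeholder credentials), not the upstream fix:

```python
# Workaround sketch: the reference column is created without NOT NULL,
# so patch it with raw DDL after the tables are defined.
from pydal import DAL, Field  # web2py's DAL, also published standalone as pydal

db = DAL('postgres://postgres:postgres@localhost/mydb')  # placeholder credentials

db.define_table('mytable', Field('name', 'string'))
db.define_table('mytable2',
                Field('name', 'string'),
                Field('mytable', db.mytable, notnull=True))

# Add the constraint the DAL did not emit (standard PostgreSQL syntax):
db.executesql('ALTER TABLE mytable2 ALTER COLUMN mytable SET NOT NULL;')
```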
171,136 | 6,479,621,557 | IssuesEvent | 2017-08-18 11:09:08 | vmware/harbor | https://api.github.com/repos/vmware/harbor | closed | The "Scan completed" is not null for unscanned images in "Image Details" Page | area/ui kind/bug priority/medium target/1.2.0 UI/tmp/PFH | ![latest_detail](https://user-images.githubusercontent.com/5835782/29345464-8ea88070-8271-11e7-90f1-e1f8432e7999.png)
The response of API:
```
{
"digest": "sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26ff74f",
"name": "latest",
"architecture": "amd64",
"os": "linux",
"docker_version": "17.03.1-ce",
"author": "",
"created": "2017-06-14T19:29:01.037740325Z",
"signature": null
}
``` | 1.0 | The "Scan completed" is not null for unscanned images in "Image Details" Page - ![latest_detail](https://user-images.githubusercontent.com/5835782/29345464-8ea88070-8271-11e7-90f1-e1f8432e7999.png)
The response of API:
```
{
"digest": "sha256:f3b3b28a45160805bb16542c9531888519430e9e6d6ffc09d72261b0d26ff74f",
"name": "latest",
"architecture": "amd64",
"os": "linux",
"docker_version": "17.03.1-ce",
"author": "",
"created": "2017-06-14T19:29:01.037740325Z",
"signature": null
}
``` | priority | the scan completed is not null for unscanned images in image details page the response of api digest name latest architecture os linux docker version ce author created signature null | 1 |
394,877 | 11,660,173,672 | IssuesEvent | 2020-03-03 02:26:35 | poissonconsulting/chk | https://api.github.com/repos/poissonconsulting/chk | closed | Update _pkgdown.yml groupings | Difficulty: 2 Intermediate Effort: 2 Medium Priority: 2 High Type: Docs | In particular make sure that it is complete and that it includes:
check_ functions
recently added chk_integer etc functions
| 1.0 | Update _pkgdown.yml groupings - In particular make sure that it is complete and that it includes:
check_ functions
recently added chk_integer etc functions
| priority | update pkgdown yml groupings in particular make sure that it is complete and that it includes check functions recently added chk integer etc functions | 1 |
298,107 | 9,195,904,586 | IssuesEvent | 2019-03-07 04:40:26 | samiha-rahman/SOEN341 | https://api.github.com/repos/samiha-rahman/SOEN341 | closed | Follow/Unfollow feature from the frontend | 1 SP Priority: MEDIUM Risk: LOW [Follow] front end user story | Add the follow/unfollow button from the frontend.
--------------
Task to follow:
- [x] Add the follow/unfollow (tail/unleash) buttons to the tweet (woof) #65
- [x] Display woofs by other users you follow #66
------------------------------
**ACCEPTANCE TEST**
Step # | Execution Procedure or Input | Expected Results/Outputs | Passed/Failed
--- | --- | --- | ---
1 | Login as a user and click on follow, go to Tailing page | User's posts and followed users' posts should appear on page | PASSED
2 | Login and go to Explore page | "Tail"/Un-leashed button should appear under each posts | PASSED
| 1.0 | Follow/Unfollow feature from the frontend - Add the follow/unfollow button from the frontend.
--------------
Task to follow:
- [x] Add the follow/unfollow (tail/unleash) buttons to the tweet (woof) #65
- [x] Display woofs by other users you follow #66
------------------------------
**ACCEPTANCE TEST**
Step # | Execution Procedure or Input | Expected Results/Outputs | Passed/Failed
--- | --- | --- | ---
1 | Login as a user and click on follow, go to Tailing page | User's posts and followed users' posts should appear on page | PASSED
2 | Login and go to Explore page | "Tail"/Un-leashed button should appear under each posts | PASSED
| priority | follow unfollow feature from the frontend add the follow unfollow button from the frontend task to follow add the follow unfollow tail unleash buttons to the tweet woof display woofs by other users you follow acceptance test step execution procedure or input expected results outputs passed failed login as a user and click on follow go to tailing page user s posts and followed users posts should appear on page passed login and go to explore page tail un leashed button should appear under each posts passed | 1 |
709,883 | 24,395,594,908 | IssuesEvent | 2022-10-04 18:57:31 | ChaosInitiative/Chaos-Source | https://api.github.com/repos/ChaosInitiative/Chaos-Source | opened | Bug: env_speaker/env_microphone does not work properly with npcs | Type: Bug What: Engine Priority 2: Medium Size 3: Medium Component: entities Component: audio | ### Describe the bug
Audio that originates from NPCs will not be able to come through the respective env_speaker, even with a proper soundscript setup. Precache warnings will also spew through the console:
```
[server] (379.53) output: (func_door,kleiner_teleport_lift_1) -> (trigger_teleportdoor_alyx,Kill)()
[server] (379.53) input kleiner_teleport_lift_1: trigger_teleportdoor_alyx.Kill()
[engine] SV_StartSound: *vo/k_lab/eli_phenom02.wav not precached (0)
[engine] SV_StartSound: *vo/k_lab/eli_phenom02.wav not precached (0)
[server] Blocking load of scene from 'scenes/Expressions/BarneyIdle.vcd'
[server] error in transition graph: walking to standing
[engine] SV_StartSound: *vo/k_lab/eli_bringthrough.wav not precached (0)
[engine] SV_StartSound: *vo/k_lab/eli_bringthrough.wav not precached (0)
[server] (394.93) output: (logic_choreographed_scene,teleport_02_scene) -> (teleport_02a_scene,Start)()
[server] (394.93) output: (logic_choreographed_scene,teleport_02_scene) -> (player_final_sequence_rl,Enable)()
[server] (394.93) input teleport_02_scene: teleport_02a_scene.Start()
```
Might be related to #567?
### To Reproduce
1. Set up an env_microphone + npc and env_speaker combination in different isolated parts of the bsp/vmf
2. Trigger a vcd
Repro video:
https://user-images.githubusercontent.com/15126754/193900988-62ce3a89-6c5e-4cd1-a07a-551006c026cc.mp4
### Issue Map
Any map with an env_microphone and env_speaker, e.g.: d1_trainstation_01, d3_breen_01 or ep2_outland_11b
### Expected Behavior
The audio should be playing at the env_speaker origin properly
### Operating System
_No response_ | 1.0 | Bug: env_speaker/env_microphone does not work properly with npcs - ### Describe the bug
Audio that originates from NPCs will not be able to come through the respective env_speaker, even with a proper soundscript setup. Precache warnings will also spew through the console:
```
[server] (379.53) output: (func_door,kleiner_teleport_lift_1) -> (trigger_teleportdoor_alyx,Kill)()
[server] (379.53) input kleiner_teleport_lift_1: trigger_teleportdoor_alyx.Kill()
[engine] SV_StartSound: *vo/k_lab/eli_phenom02.wav not precached (0)
[engine] SV_StartSound: *vo/k_lab/eli_phenom02.wav not precached (0)
[server] Blocking load of scene from 'scenes/Expressions/BarneyIdle.vcd'
[server] error in transition graph: walking to standing
[engine] SV_StartSound: *vo/k_lab/eli_bringthrough.wav not precached (0)
[engine] SV_StartSound: *vo/k_lab/eli_bringthrough.wav not precached (0)
[server] (394.93) output: (logic_choreographed_scene,teleport_02_scene) -> (teleport_02a_scene,Start)()
[server] (394.93) output: (logic_choreographed_scene,teleport_02_scene) -> (player_final_sequence_rl,Enable)()
[server] (394.93) input teleport_02_scene: teleport_02a_scene.Start()
```
Might be related to #567?
### To Reproduce
1. Set up an env_microphone + npc and env_speaker combination in different isolated parts of the bsp/vmf
2. Trigger a vcd
Repro video:
https://user-images.githubusercontent.com/15126754/193900988-62ce3a89-6c5e-4cd1-a07a-551006c026cc.mp4
### Issue Map
Any map with an env_microphone and env_speaker, e.g.: d1_trainstation_01, d3_breen_01 or ep2_outland_11b
### Expected Behavior
The audio should be playing at the env_speaker origin properly
### Operating System
_No response_ | priority | bug env speaker env microphone does not work properly with npcs describe the bug audio that originates from npcs will not be able to come through the respective env speaker even with a proper soundscript setup precache warnings will also spew through the console output func door kleiner teleport lift trigger teleportdoor alyx kill input kleiner teleport lift trigger teleportdoor alyx kill sv startsound vo k lab eli wav not precached sv startsound vo k lab eli wav not precached blocking load of scene from scenes expressions barneyidle vcd error in transition graph walking to standing sv startsound vo k lab eli bringthrough wav not precached sv startsound vo k lab eli bringthrough wav not precached output logic choreographed scene teleport scene teleport scene start output logic choreographed scene teleport scene player final sequence rl enable input teleport scene teleport scene start might be related to to reproduce set up an env microphone npc and env speaker combination in different isolated parts of the bsp vmf trigger a vcd repro video issue map any map with an env microphone and env speaker e g trainstation breen or outland expected behavior the audio should be playing at the env speaker origin properly operating system no response | 1 |
535,890 | 15,700,464,757 | IssuesEvent | 2021-03-26 09:53:40 | sopra-fs21-group-16/mth-client | https://api.github.com/repos/sopra-fs21-group-16/mth-client | opened | Create summary screen (scheduling) | medium priority task | Create a summary screen for the completed scheduling session #6, where you can see the chosen activity, location and date. | 1.0 | Create summary screen (scheduling) - Create a summary screen for the completed scheduling session #6, where you can see the chosen activity, location and date. | priority | create summary screen scheduling create a summary screen for the completed scheduling session where you can see the chosen activity location and date | 1 |
539,036 | 15,782,477,187 | IssuesEvent | 2021-04-01 12:51:06 | AY2021S2-CS2113T-F08-1/tp | https://api.github.com/repos/AY2021S2-CS2113T-F08-1/tp | closed | Add Storage Class | priority.Medium type.Story | As a TA, I can save and load the modules and module information so that I can access the information across different sessions. | 1.0 | Add Storage Class - As a TA, I can save and load the modules and module information so that I can access the information across different sessions. | priority | add storage class as a ta i can save and load the modules and module information so that i can access the information across different sessions | 1 |
613,426 | 19,090,001,672 | IssuesEvent | 2021-11-29 11:00:23 | canonical-web-and-design/snapcraft.io | https://api.github.com/repos/canonical-web-and-design/snapcraft.io | closed | No button margins between 1366px and 774px on brand store invites table | Priority: Medium | > Members tab, invites; between 1366px - 774px the resend + revoke buttons don't have any margin applied so they literally stack on top of each other, this will effect some laptop and tablet landscape users | 1.0 | No button margins between 1366px and 774px on brand store invites table - > Members tab, invites; between 1366px - 774px the resend + revoke buttons don't have any margin applied so they literally stack on top of each other, this will effect some laptop and tablet landscape users | priority | no button margins between and on brand store invites table members tab invites between the resend revoke buttons don t have any margin applied so they literally stack on top of each other this will effect some laptop and tablet landscape users | 1 |
363,715 | 10,746,544,425 | IssuesEvent | 2019-10-30 11:16:21 | canonical-web-and-design/vanilla-framework | https://api.github.com/repos/canonical-web-and-design/vanilla-framework | closed | Docs for icons missing accessibility feature | Priority: Medium | For accessibility purposes, you can add text within an icon that will not be displayed.
e.g.
`<i className="p-icon--contextual-menu">This text will not be displayed</i>`
This feature is not documented here: https://docs.vanillaframework.io/patterns/icons/ | 1.0 | Docs for icons missing accessibility feature - For accessibility purposes, you can add text within an icon that will not be displayed.
e.g.
`<i className="p-icon--contextual-menu">This text will not be displayed</i>`
This feature is not documented here: https://docs.vanillaframework.io/patterns/icons/ | priority | docs for icons missing accessibility feature for accessibility purposes you can add text within an icon that will not be displayed e g this text will not be displayed this feature is not documented here | 1 |
588,933 | 17,686,088,333 | IssuesEvent | 2021-08-24 01:54:45 | hackforla/design-systems | https://api.github.com/repos/hackforla/design-systems | closed | Create logo for the Design System project | Role: UI priority: medium size: small | ### Overview
Create a logo for this project so that it can be used across the website and documentation
### Action Items
- [x] Find out the typeface that is in the Hack for LA logo.
- [x] Change the mockup logo with the new typeface, reference lock up example below.
- [x] Create one version using #030D2D as the typeface colour
- [x] Create one version using #FFFFFF as the typeface colour
<img width="473" alt="Screenshot 2021-07-16 at 16 39 36" src="https://user-images.githubusercontent.com/6236085/126018081-2aa454aa-a7e9-48f4-9ff0-373b7ee99084.png">
Create another layout similar to the following, ignore the kerning, it's just an example.
<img width="1236" alt="Screenshot 2021-07-16 at 16 39 54" src="https://user-images.githubusercontent.com/6236085/126018137-3c9a0cf5-3885-415e-a72e-a132845ad5b9.png">
### Resources/Instructions
[Figma link](https://www.figma.com/file/ly2kOpJc98oPbSIc181F2l/HfLA-Design-Systems?node-id=968%3A4146)
| 1.0 | Create logo for the Design System project - ### Overview
Create a logo for this project so that it can be used across the website and documentation
### Action Items
- [x] Find out the typeface that is in the Hack for LA logo.
- [x] Change the mockup logo with the new typeface, reference lock up example below.
- [x] Create one version using #030D2D as the typeface colour
- [x] Create one version using #FFFFFF as the typeface colour
<img width="473" alt="Screenshot 2021-07-16 at 16 39 36" src="https://user-images.githubusercontent.com/6236085/126018081-2aa454aa-a7e9-48f4-9ff0-373b7ee99084.png">
Create another layout similar to the following, ignore the kerning, it's just an example.
<img width="1236" alt="Screenshot 2021-07-16 at 16 39 54" src="https://user-images.githubusercontent.com/6236085/126018137-3c9a0cf5-3885-415e-a72e-a132845ad5b9.png">
### Resources/Instructions
[Figma link](https://www.figma.com/file/ly2kOpJc98oPbSIc181F2l/HfLA-Design-Systems?node-id=968%3A4146)
| priority | create logo for the design system project overview create a logo for this project so that it can be used across the website and documentation action items find out the typeface that is in the hack for la logo change the mockup logo with the new typeface reference lock up example below create one version using as the typeface colour create one version using ffffff as the typeface colour img width alt screenshot at src create another layout similar to the following ignore the kerning it s just an example img width alt screenshot at src resources instructions | 1 |
293,473 | 8,996,229,101 | IssuesEvent | 2019-02-02 00:11:23 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Minting abuse | Medium Priority | I've witnessed person who was minting his coal ore just because it was easier than storing it somewhere.
That looks like an abuse of a system to me. what do you think ? | 1.0 | Minting abuse - I've witnessed person who was minting his coal ore just because it was easier than storing it somewhere.
That looks like an abuse of a system to me. what do you think ? | priority | minting abuse i ve witnessed person who was minting his coal ore just because it was easier than storing it somewhere that looks like an abuse of a system to me what do you think | 1 |
4,799 | 2,564,058,509 | IssuesEvent | 2015-02-06 17:13:51 | osulp/primo | https://api.github.com/repos/osulp/primo | opened | Create a permalink for basic searches | Medium Priority | To use in LibGuides and for troubleshooting issues with patrons via chat, it would be very helpful to have a permalink to a particular search. Western Washington U has this:
http://onesearch.library.wwu.edu/primo_library/libweb/action/dlSearch.do?institution=WWU&vid=WWU&tab=default&mode=Basic&group=GUEST&onCampus=true&displayMode=full&highlight=true&displayField=all&search_scope=All&&query=any,contains,stuff
There's a link below the search box that says "Permalink for this search: http://library.wwu.edu/onesearch/stuff"
Looks like there is a mechanism on the server to rewrite the permalink into the basic search keyword and the scope. For example, if you specify a scope other than the default it's appended to the end of the URL: http://library.wwu.edu/onesearch/stuff/wwu_only | 1.0 | Create a permalink for basic searches - To use in LibGuides and for troubleshooting issues with patrons via chat, it would be very helpful to have a permalink to a particular search. Western Washington U has this:
http://onesearch.library.wwu.edu/primo_library/libweb/action/dlSearch.do?institution=WWU&vid=WWU&tab=default&mode=Basic&group=GUEST&onCampus=true&displayMode=full&highlight=true&displayField=all&search_scope=All&&query=any,contains,stuff
There's a link below the search box that says "Permalink for this search: http://library.wwu.edu/onesearch/stuff"
Looks like there is a mechanism on the server to rewrite the permalink into the basic search keyword and the scope. For example, if you specify a scope other than the default it's appended to the end of the URL: http://library.wwu.edu/onesearch/stuff/wwu_only | priority | create a permalink for basic searches to use in libguides and for troubleshooting issues with patrons via chat it would be very helpful to have a permalink to a particular search western washington u has this there s a link below the search box that says permalink for this search looks like there is a mechanism on the server to rewrite the permalink into the basic search keyword and the scope for example if you specify a scope other than the default it s appended to the end of the url | 1 |
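A note on the record above: it only gestures at how WWU's short permalinks get rewritten, so a concrete sketch may help. The following is a hypothetical Flask front end — not WWU's actual implementation, which is not shown in the issue — that maps /onesearch/<keyword>[/<scope>] onto the long dlSearch.do URL quoted above; the institution/vid values are copied from the WWU example, and every route and parameter name here is an assumption.
```python
from urllib.parse import quote

from flask import Flask, redirect

app = Flask(__name__)

# Base Primo deep-search URL taken from the WWU example above; institution/vid
# are placeholders for whatever the local Primo instance uses.
PRIMO_BASE = (
    "http://onesearch.library.wwu.edu/primo_library/libweb/action/dlSearch.do"
    "?institution=WWU&vid=WWU&tab=default&mode=Basic&group=GUEST"
    "&onCampus=true&displayMode=full&highlight=true&displayField=all"
)


@app.route("/onesearch/<query>")
@app.route("/onesearch/<query>/<scope>")
def permalink(query, scope="All"):
    # Rewrite the short permalink into the full basic-search URL: the keyword
    # becomes query=any,contains,<keyword>, the optional path segment becomes search_scope.
    target = f"{PRIMO_BASE}&search_scope={quote(scope)}&query=any,contains,{quote(query)}"
    return redirect(target)


if __name__ == "__main__":
    app.run(port=8080)
```
An Apache or nginx rewrite rule could do the same job; the point is only that the short path carries the keyword and optional scope while everything else in the long URL stays constant.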
655,684 | 21,705,074,358 | IssuesEvent | 2022-05-10 08:52:56 | Amulet-Team/Amulet-Map-Editor | https://api.github.com/repos/Amulet-Team/Amulet-Map-Editor | opened | [Feature Request] Lock config files with portalocker | type: enhancement priority: medium | ## Feature Request
### The Problem
The current config file implementation loads the file, modifies the loaded data and then writes it back.
This could lead to race conditions across threads and processes.
### Feature Description
The get/set config functions should be replaced with a context manager function that locks the config file and loads and returns an object. When the context exits it writes the object back and releases the file.
This would allow the same config file to be edited by multiple processes to share the same config data.
```py
from typing import Optional, BinaryIO
import pickle
import portalocker
class ConfigManager:
def __init__(self, path: str):
self._path = path
self._handler: Optional[BinaryIO] = None
self._data = {}
def __enter__(self) -> dict:
if self._handler is not None:
raise Exception("with statement can only be used once on ConfigManager")
self._handler = open(self._path, "ab+")
self._handler.seek(0)
portalocker.lock(self._handler, portalocker.LockFlags.EXCLUSIVE)
try:
self._data = pickle.load(self._handler)
except:
pass
return self._data
def __exit__(self, exc_type, exc_val, exc_tb):
self._handler.truncate()
self._handler.seek(0)
pickle.dump(self._data, self._handler)
portalocker.unlock(self._handler)
self._handler.close()
def test():
with ConfigManager("test") as cfg:
assert isinstance(cfg, dict)
cfg["test_val"] = 5
with ConfigManager("test") as cfg:
assert isinstance(cfg, dict)
assert cfg["test_val"] == 5
if __name__ == '__main__':
test()
```
| 1.0 | [Feature Request] Lock config files with portalocker - ## Feature Request
### The Problem
The current config file implementation loads the file, modifies the loaded data and then writes it back.
This could lead to race conditions across threads and processes.
### Feature Description
The get/set config functions should be replaced with a context manager function that locks the config file and loads and returns an object. When the context exits it writes the object back and releases the file.
This would allow the same config file to be edited by multiple processes to share the same config data.
```py
from typing import Optional, BinaryIO
import pickle
import portalocker
class ConfigManager:
def __init__(self, path: str):
self._path = path
self._handler: Optional[BinaryIO] = None
self._data = {}
def __enter__(self) -> dict:
if self._handler is not None:
raise Exception("with statement can only be used once on ConfigManager")
self._handler = open(self._path, "ab+")
self._handler.seek(0)
portalocker.lock(self._handler, portalocker.LockFlags.EXCLUSIVE)
try:
self._data = pickle.load(self._handler)
except:
pass
return self._data
def __exit__(self, exc_type, exc_val, exc_tb):
self._handler.truncate()
self._handler.seek(0)
pickle.dump(self._data, self._handler)
portalocker.unlock(self._handler)
self._handler.close()
def test():
with ConfigManager("test") as cfg:
assert isinstance(cfg, dict)
cfg["test_val"] = 5
with ConfigManager("test") as cfg:
assert isinstance(cfg, dict)
assert cfg["test_val"] == 5
if __name__ == '__main__':
test()
```
| priority | lock config files with portalocker feature request the problem the current config file implementation loads the file modifies the loaded data and then writes it back this could lead to race conditions across threads and processes feature description the get set config functions should be replaced with a context manager function that locks the config file and loads and returns an object when the context exits it writes the object back and releases the file this would allow the same config file to be edited by multiple processes to share the same config data py from typing import optional binaryio import pickle import portalocker class configmanager def init self path str self path path self handler optional none self data def enter self dict if self handler is not none raise exception with statement can only be used once on configmanager self handler open self path ab self handler seek portalocker lock self handler portalocker lockflags exclusive try self data pickle load self handler except pass return self data def exit self exc type exc val exc tb self handler truncate self handler seek pickle dump self data self handler portalocker unlock self handler self handler close def test with configmanager test as cfg assert isinstance cfg dict cfg with configmanager test as cfg assert isinstance cfg dict assert cfg if name main test | 1 |
792,713 | 27,972,258,802 | IssuesEvent | 2023-03-25 06:16:44 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | closed | Fully specified monorepos should do an exact match | type:bug priority-3-medium good first issue status:ready reproduction:confirmed | ### How are you running Renovate?
WhiteSource Renovate hosted app on github.com
### If you're self-hosting Renovate, tell us what version of Renovate you run.
_No response_
### Please select which platform you are using if self-hosting.
_No response_
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
`react-native` is grouped with `react` packages in "react monorepo" mistakenly. This is because `https://github.com/facebook/react` is a prefix of `https://github.com/facebook/react-native`.
Reproduction: https://github.com/renovate-tests/npm96
### Relevant debug logs
<details><summary>Logs</summary>
```
Copy/paste any log here, between the starting and ending backticks
```
</details>
### Have you created a minimal reproduction repository?
I have linked to a minimal reproduction repository in the bug description | 1.0 | Fully specified monorepos should do an exact match - ### How are you running Renovate?
WhiteSource Renovate hosted app on github.com
### If you're self-hosting Renovate, tell us what version of Renovate you run.
_No response_
### Please select which platform you are using if self-hosting.
_No response_
### If you're self-hosting Renovate, tell us what version of the platform you run.
_No response_
### Was this something which used to work for you, and then stopped?
I never saw this working
### Describe the bug
`react-native` is grouped with `react` packages in "react monorepo" mistakenly. This is because `https://github.com/facebook/react` is a prefix of `https://github.com/facebook/react-native`.
Reproduction: https://github.com/renovate-tests/npm96
### Relevant debug logs
<details><summary>Logs</summary>
```
Copy/paste any log here, between the starting and ending backticks
```
</details>
### Have you created a minimal reproduction repository?
I have linked to a minimal reproduction repository in the bug description | priority | fully specified monorepos should do an exact match how are you running renovate whitesource renovate hosted app on github com if you re self hosting renovate tell us what version of renovate you run no response please select which platform you are using if self hosting no response if you re self hosting renovate tell us what version of the platform you run no response was this something which used to work for you and then stopped i never saw this working describe the bug react native is grouped with react packages in react monorepo mistakenly this is because is a prefix of reproduction relevant debug logs logs copy paste any log here between the starting and ending backticks have you created a minimal reproduction repository i have linked to a minimal reproduction repository in the bug description | 1 |
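A note on the record above: the report attributes the mis-grouping to plain string-prefix matching on the monorepo URL. The sketch below is not Renovate's code — just a minimal illustration of why a bare startswith over-matches and what an exact-or-boundary match (the behaviour the title asks for) looks like.
```python
MONOREPO_URL = "https://github.com/facebook/react"


def prefix_match(source_url: str) -> bool:
    # Over-matching check: any URL that merely starts with the monorepo URL is
    # accepted, so ".../react-native" gets treated as part of the react monorepo.
    return source_url.startswith(MONOREPO_URL)


def exact_match(source_url: str) -> bool:
    # A fully specified monorepo URL should only match the repo itself,
    # optionally followed by a path separator, never a repo sharing a prefix.
    return source_url == MONOREPO_URL or source_url.startswith(MONOREPO_URL + "/")


react_native = "https://github.com/facebook/react-native"
assert prefix_match(react_native)       # the reported over-match
assert not exact_match(react_native)    # the expected behaviour
assert exact_match("https://github.com/facebook/react")
```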
569,469 | 17,014,653,700 | IssuesEvent | 2021-07-02 10:12:49 | Energinet-DataHub/MightyDucks | https://api.github.com/repos/Energinet-DataHub/MightyDucks | closed | Help with Creating the Shared Resources Integration point Servicebus namespace | MightyDucks Priority: Medium | <img width="837" alt="Screenshot 2021-06-21 at 12 18 29" src="https://user-images.githubusercontent.com/3541298/122746937-cbb6b280-d28a-11eb-8260-2aa235e78fe0.png">
| 1.0 | Help with Creating the Shared Resources Integration point Servicebus namespace - <img width="837" alt="Screenshot 2021-06-21 at 12 18 29" src="https://user-images.githubusercontent.com/3541298/122746937-cbb6b280-d28a-11eb-8260-2aa235e78fe0.png">
| priority | help with creating the shared resources integration point servicebus namespace img width alt screenshot at src | 1 |
735,354 | 25,390,912,804 | IssuesEvent | 2022-11-22 03:47:51 | momentum-mod/game | https://api.github.com/repos/momentum-mod/game | closed | Panorama FPS meter HUD | Type: Enhancement Priority: Medium Size: Small | ### What feature is your improvement idea related to?
Panorama port
### Describe the solution you'd like
Seeing this a lot in the Zeplin but hasn't been talked about.
### Describe alternatives you've considered, if any.
_No response_
### Additional context
I think this is an engine panel? Not required for 0.9.1 but would be nice. | 1.0 | Panorama FPS meter HUD - ### What feature is your improvement idea related to?
Panorama port
### Describe the solution you'd like
Seeing this a lot in the Zeplin but hasn't been talked about.
### Describe alternatives you've considered, if any.
_No response_
### Additional context
I think this is an engine panel? Not required for 0.9.1 but would be nice. | priority | panorama fps meter hud what feature is your improvement idea related to panorama port describe the solution you d like seeing this a lot in the zeplin but hasn t been talked about describe alternatives you ve considered if any no response additional context i think this is an engine panel not required for but would be nice | 1 |
676,414 | 23,124,416,677 | IssuesEvent | 2022-07-28 03:04:43 | cub42d/cub3d | https://api.github.com/repos/cub42d/cub3d | closed | [To do - Subtask] Tuto : The texture - texture by direction, luminosity etc. | Status: In Progress Priority: Medium | ## Description
These are the subtasks for "Applying textures" in mentor Kang Jung-bin's tutorial.
## Progress
- [x] Load a different texture for each wall direction (east, west, south, north)
## ETC
The code currently written on the ray_casting_test branch only uses the brick texture.
For that texture I declared the mlx pixel-related variables (bit per pixle, line_len, endian), but giving every texture its own set of variables seemed inefficient, so I think it would be better to create an array that stores only the textures and then free the loaded image with mlx_image_destory.
| 1.0 | [To do - Subtask] Tuto : The texture - texture by direction, luminosity etc. - ## Description
These are the subtasks for "Applying textures" in mentor Kang Jung-bin's tutorial.
## Progress
- [x] Load a different texture for each wall direction (east, west, south, north)
## ETC
The code currently written on the ray_casting_test branch only uses the brick texture.
For that texture I declared the mlx pixel-related variables (bit per pixle, line_len, endian), but giving every texture its own set of variables seemed inefficient, so I think it would be better to create an array that stores only the textures and then free the loaded image with mlx_image_destory.
| priority | tuto the texture texture by direction luminosity etc description these are the subtasks for applying textures in mentor kang jung bin s tutorial progress load a different texture for each wall direction east west south north etc the code currently written on the ray casting test branch only uses the brick texture for that texture i declared the mlx pixel related variables bit per pixle line len endian but giving every texture its own set of variables seemed inefficient so i think it would be better to create an array that stores only the textures and then free the loaded image with mlx image destory | 1 |
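A note on the record above: a different texture per wall direction is usually driven by the DDA hit information in a Lodev-style raycaster. The sketch below is a minimal Python illustration of that selection logic, not code from the cub3d repo (which is C with MiniLibX); the mapping of ray-direction sign to compass name is a convention and may be flipped in a given project.
```python
NORTH, SOUTH, EAST, WEST = range(4)


def wall_texture_index(side: int, ray_dir_x: float, ray_dir_y: float) -> int:
    """Pick which of the four wall textures to use for a ray hit.

    side == 0 means the DDA stepped across a vertical grid line (an east/west face),
    side == 1 means it stepped across a horizontal grid line (a north/south face).
    The sign of the ray direction says which of the two faces was hit.
    """
    if side == 0:
        return WEST if ray_dir_x > 0 else EAST
    return NORTH if ray_dir_y > 0 else SOUTH


# Example: a ray travelling +x, +y that crosses a horizontal grid line hits a "north" face
# under this convention.
assert wall_texture_index(1, 0.7, 0.7) == NORTH
```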
591,132 | 17,795,456,576 | IssuesEvent | 2021-08-31 21:29:48 | UCSD-E4E/Pyrenote | https://api.github.com/repos/UCSD-E4E/Pyrenote | closed | Hide Annotations Toggle Feature | dev: enhancement priority: medium | Would be nice to have some sort of checkbox that would enable a user to turn off all annotations temporarily. This would be handy in situations where there might be a constant vocalization of interest occurring throughout the entirety of a clip as well as shorter vocalizations of interest throughout. This could maybe come in the form of being able to hide annotations of a certain class. I thought of this while listening to a clip from the Scripps Coastal Reserve where there was the sound of an airplane throughout the clip, would be nice to annotate that, then hide it to get it out of the way if I wanted to annotate bird vocalizations that occurred as well. Similar situation can happen with Howler Monkey vocalizations that usually occur throughout a clip, that causes birds to oftentimes vocalize as well. | 1.0 | Hide Annotations Toggle Feature - Would be nice to have some sort of checkbox that would enable a user to turn off all annotations temporarily. This would be handy in situations where there might be a constant vocalization of interest occurring throughout the entirety of a clip as well as shorter vocalizations of interest throughout. This could maybe come in the form of being able to hide annotations of a certain class. I thought of this while listening to a clip from the Scripps Coastal Reserve where there was the sound of an airplane throughout the clip, would be nice to annotate that, then hide it to get it out of the way if I wanted to annotate bird vocalizations that occurred as well. Similar situation can happen with Howler Monkey vocalizations that usually occur throughout a clip, that causes birds to oftentimes vocalize as well. | priority | hide annotations toggle feature would be nice to have some sort of checkbox that would enable a user to turn off all annotations temporarily this would be handy in situations where there might be a constant vocalization of interest occurring throughout the entirety of a clip as well as shorter vocalizations of interest throughout this could maybe come in the form of being able to hide annotations of a certain class i thought of this while listening to a clip from the scripps coastal reserve where there was the sound of an airplane throughout the clip would be nice to annotate that then hide it to get it out of the way if i wanted to annotate bird vocalizations that occurred as well similar situation can happen with howler monkey vocalizations that usually occur throughout a clip that causes birds to oftentimes vocalize as well | 1 |
251,786 | 8,027,302,285 | IssuesEvent | 2018-07-27 08:38:01 | Optiboot/optiboot | https://api.github.com/repos/Optiboot/optiboot | closed | Does it possible to call/ jump into boot loader from main loop ( for remote access)? | Priority-Medium Type-Enhancement auto-migrated | ```
What steps will reproduce the problem?
1.-
2.
3.
What is the expected output? What do you see instead?
-
What version of the product are you using? On what operating system?
AVRdude / Custom Made board / Win7
Please provide any additional information below.
For develop remote flash by serial link application.I try to jump from main loop to boot loader.Please comment.
```
Original issue reported on code.google.com by `[email protected]` on 20 Jan 2012 at 2:33
| 1.0 | Does it possible to call/ jump into boot loader from main loop ( for remote access)? - ```
What steps will reproduce the problem?
1.-
2.
3.
What is the expected output? What do you see instead?
-
What version of the product are you using? On what operating system?
AVRdude / Custom Made board / Win7
Please provide any additional information below.
For develop remote flash by serial link application.I try to jump from main loop to boot loader.Please comment.
```
Original issue reported on code.google.com by `[email protected]` on 20 Jan 2012 at 2:33
| priority | does it possible to call jump into boot loader from main loop for remote access what steps will reproduce the problem what is the expected output what do you see instead what version of the product are you using on what operating system avrdude custom made board please provide any additional information below for develop remote flash by serial link application i try to jump from main loop to boot loader please comment original issue reported on code google com by tsupa gmail com on jan at | 1 |
789,877 | 27,808,475,267 | IssuesEvent | 2023-03-17 23:00:31 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [YSQL] ALTER TABLE fails constraints if executed in the same connection | kind/bug area/ysql priority/medium status/awaiting-triage | Jira Link: [DB-5878](https://yugabyte.atlassian.net/browse/DB-5878)
### Description
In YB master, executing the following sequence of statements will fail
```SQL
yugabyte=# CREATE TABLE nopk ( id int CHECK (id > 0), v1 int CHECK (v1 > 0));
CREATE TABLE
yugabyte=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ERROR: duplicate key value violates unique constraint "pg_constraint_conrelid_contypid_conname_index"
```
while the following sequence of statement can succeed
```
yugabyte=# CREATE TABLE nopk ( id int CHECK (id > 0));
CREATE TABLE
yugabyte=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ALTER TABLE
```
```
yugabyte=# CREATE TABLE nopk ( id int CHECK (id > 0), v2 int NOT NULL);
CREATE TABLE
yugabyte=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ALTER TABLE
```
Note: if the `ALTER` is executed in another connection, it can succeed as well
```
yugabyte=# CREATE TABLE nopk ( id int CHECK (id > 0), v1 int CHECK (v1 > 0));
CREATE TABLE
yugabyte=#
yugabyte=# ^D\q
<restart connection>
ssong@dev-server-ssong ~/c/yugabyte-db (fix-preload-mem-scan-context)> ysqlsh
ysqlsh (11.2-YB-2.17.2.0-b0)
Type "help" for help.
yugabyte=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ALTER TABLE
```
In, PG the same sequence of statements can succeed
```
postgres=# CREATE TABLE nopk ( id int CHECK (id > 0), v1 int CHECK (v1 > 0));
CREATE TABLE
postgres=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ALTER TABLE
```
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-5878]: https://yugabyte.atlassian.net/browse/DB-5878?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | 1.0 | [YSQL] ALTER TABLE fails constraints if executed in the same connection - Jira Link: [DB-5878](https://yugabyte.atlassian.net/browse/DB-5878)
### Description
In YB master, executing the following sequence of statements will fail
```SQL
yugabyte=# CREATE TABLE nopk ( id int CHECK (id > 0), v1 int CHECK (v1 > 0));
CREATE TABLE
yugabyte=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ERROR: duplicate key value violates unique constraint "pg_constraint_conrelid_contypid_conname_index"
```
while the following sequence of statement can succeed
```
yugabyte=# CREATE TABLE nopk ( id int CHECK (id > 0));
CREATE TABLE
yugabyte=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ALTER TABLE
```
```
yugabyte=# CREATE TABLE nopk ( id int CHECK (id > 0), v2 int NOT NULL);
CREATE TABLE
yugabyte=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ALTER TABLE
```
Note: if the `ALTER` is executed in another connection, it can succeed as well
```
yugabyte=# CREATE TABLE nopk ( id int CHECK (id > 0), v1 int CHECK (v1 > 0));
CREATE TABLE
yugabyte=#
yugabyte=# ^D\q
<restart connection>
ssong@dev-server-ssong ~/c/yugabyte-db (fix-preload-mem-scan-context)> ysqlsh
ysqlsh (11.2-YB-2.17.2.0-b0)
Type "help" for help.
yugabyte=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ALTER TABLE
```
In, PG the same sequence of statements can succeed
```
postgres=# CREATE TABLE nopk ( id int CHECK (id > 0), v1 int CHECK (v1 > 0));
CREATE TABLE
postgres=# ALTER TABLE nopk ADD PRIMARY KEY (id);
ALTER TABLE
```
### Warning: Please confirm that this issue does not contain any sensitive information
- [X] I confirm this issue does not contain any sensitive information.
[DB-5878]: https://yugabyte.atlassian.net/browse/DB-5878?atlOrigin=eyJpIjoiNWRkNTljNzYxNjVmNDY3MDlhMDU5Y2ZhYzA5YTRkZjUiLCJwIjoiZ2l0aHViLWNvbS1KU1cifQ | priority | alter table fails constraints if executed in the same connection jira link description in yb master executing the following sequence of statements will fail sql yugabyte create table nopk id int check id int check create table yugabyte alter table nopk add primary key id error duplicate key value violates unique constraint pg constraint conrelid contypid conname index while the following sequence of statement can succeed yugabyte create table nopk id int check id create table yugabyte alter table nopk add primary key id alter table yugabyte create table nopk id int check id int not null create table yugabyte alter table nopk add primary key id alter table note if the alter is executed in another connection it can succeed as well yugabyte create table nopk id int check id int check create table yugabyte yugabyte d q ssong dev server ssong c yugabyte db fix preload mem scan context ysqlsh ysqlsh yb type help for help yugabyte alter table nopk add primary key id alter table in pg the same sequence of statements can succeed postgres create table nopk id int check id int check create table postgres alter table nopk add primary key id alter table warning please confirm that this issue does not contain any sensitive information i confirm this issue does not contain any sensitive information | 1 |
244,340 | 7,874,036,230 | IssuesEvent | 2018-06-25 15:49:50 | cms-gem-daq-project/gem-plotting-tools | https://api.github.com/repos/cms-gem-daq-project/gem-plotting-tools | opened | Feature Request: 2D Map of Detector Scurve Width | Priority: Medium Status: Help Wanted Type: Enhancement | <!--- Provide a general summary of the issue in the Title above -->
## Brief summary of issue
<!--- Provide a description of the issue, including any other issues or pull requests it references -->
To better correlate channel loss with physical location on the detector a new distribution is needed. It should be a `TH2F` which has on the y-axis `ieta` and on the x-axis `strip`. Here `strip` should go as 0 to 383. The z-axis should be the scurve width.
However additional distributions may be of interest in the future so we should add a function to [anautilities.py](https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/develop/anautilities.py). This function should take as input either a dictionary or a multidimensional numpy array. In either case something of the form of:
```
def makeDetectorMap(inputContainer):
"""
inputContainer - container where inputContainer[vfat][ROBstr] is the observable of interest for (vfat, ROBstr) ordered pair
"""
#initialize some TH2F object called hDetectorMap
#Loop over inputContainer[vfat][ROBstr]
#Get ieta position corresponding to (vfat, ROBstr) using chamber_vfatPos2iEta
#Determine binX and binY of hDetectorMap that corresponds to (vfat, ROBstr)
#Use the TH2F::SetBinContent() method to set inputContainer[vfat][ROBstr] to (binX,binY)
return hDetectorMap
```
Where is imported from `chamber_vfatPos2iEta` comes from [chamberInfo Line 72](https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/3f1d85dedc8963f72467b1d253112b3f8d57aa04/mapping/chamberInfo.py#L72)
Then add making a detector map of scurve width in `anaUltraScurve.py`
### Types of issue
<!--- Propsed labels (see CONTRIBUTING.md) to help maintainers label your issue: -->
- [ ] Bug report (report an issue with the code)
- [X] Feature request (request for change which adds functionality)
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
We should make a physical map of scurve width as a single 2D plot across the detector in `anaUltraScurve.py`
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
See above pseudocode.
## Context (for feature requests)
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
Will help us understand the nature of the channel loss.
<!--- Template thanks to https://www.talater.com/open-source-templates/#/page/98 -->
| 1.0 | Feature Request: 2D Map of Detector Scurve Width - <!--- Provide a general summary of the issue in the Title above -->
## Brief summary of issue
<!--- Provide a description of the issue, including any other issues or pull requests it references -->
To better correlate channel loss with physical location on the detector a new distribution is needed. It should be a `TH2F` which has on the y-axis `ieta` and on the x-axis `strip`. Here `strip` should go as 0 to 383. The z-axis should be the scurve width.
However additional distributions may be of interest in the future so we should add a function to [anautilities.py](https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/develop/anautilities.py). This function should take as input either a dictionary or a multidimensional numpy array. In either case something of the form of:
```
def makeDetectorMap(inputContainer):
"""
inputContainer - container where inputContainer[vfat][ROBstr] is the observable of interest for (vfat, ROBstr) ordered pair
"""
#initialize some TH2F object called hDetectorMap
#Loop over inputContainer[vfat][ROBstr]
#Get ieta position corresponding to (vfat, ROBstr) using chamber_vfatPos2iEta
#Determine binX and binY of hDetectorMap that corresponds to (vfat, ROBstr)
#Use the TH2F::SetBinContent() method to set inputContainer[vfat][ROBstr] to (binX,binY)
return hDetectorMap
```
Where is imported from `chamber_vfatPos2iEta` comes from [chamberInfo Line 72](https://github.com/cms-gem-daq-project/gem-plotting-tools/blob/3f1d85dedc8963f72467b1d253112b3f8d57aa04/mapping/chamberInfo.py#L72)
Then add making a detector map of scurve width in `anaUltraScurve.py`
### Types of issue
<!--- Propsed labels (see CONTRIBUTING.md) to help maintainers label your issue: -->
- [ ] Bug report (report an issue with the code)
- [X] Feature request (request for change which adds functionality)
## Expected Behavior
<!--- If you're describing a bug, tell us what should happen -->
<!--- If you're suggesting a change/improvement, tell us how it should work -->
We should make a physical map of scurve width as a single 2D plot across the detector in `anaUltraScurve.py`
## Current Behavior
<!--- If describing a bug, tell us what happens instead of the expected behavior -->
<!--- If suggesting a change/improvement, explain the difference from current behavior -->
See above pseudocode.
## Context (for feature requests)
<!--- How has this issue affected you? What are you trying to accomplish? -->
<!--- Providing context helps us come up with a solution that is most useful in the real world -->
Will help us understand the nature of the channel loss.
<!--- Template thanks to https://www.talater.com/open-source-templates/#/page/98 -->
| priority | feature request map of detector scurve width brief summary of issue to better correlate channel loss with physical location on the detector a new distribution is needed it should be a which has on the y axis ieta and on the x axis strip here strip should go as to the z axis should be the scurve width however additional distributions may be of interest in the future so we should add a function to this function should take as input either a dictionary or a multidimensional numpy array in either case something of the form of def makedetectormap inputcontainer inputcontainer container where inputcontainer is the observable of interest for vfat robstr ordered pair initialize some object called hdetectormap loop over inputcontainer get ieta position corresponding to vfat robstr using chamber determine binx and biny of hdetectormap that corresponds to vfat robstr use the setbincontent method to set inputcontainer to binx biny return hdetectormap where is imported from chamber comes from then add making a detector map of scurve width in anaultrascurve py types of issue bug report report an issue with the code feature request request for change which adds functionality expected behavior we should make a physical map of scurve width as a single plot across the detector in anaultrascurve py current behavior see above pseudocode context for feature requests will help us understand the nature of the channel loss | 1 |
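The pseudocode in the record above can be made a little more concrete. The following PyROOT sketch assumes 24 VFATs laid out as 8 ieta rows of 3, 128 channels per VFAT, and a placeholder chamber_vfatPos2iEta dictionary — the real mapping lives in mapping/chamberInfo.py, so both the dictionary and the iphi = vfat % 3 assumption would have to be replaced with the project's own values.
```python
import ROOT

# Placeholder mapping from VFAT position to ieta row; the real mapping is
# chamber_vfatPos2iEta in gem-plotting-tools' mapping/chamberInfo.py.
chamber_vfatPos2iEta = {vfat: vfat // 3 + 1 for vfat in range(24)}


def makeDetectorMap(inputContainer, name="hDetectorMap", title="Scurve width by position"):
    """Fill a TH2F of ieta (y-axis) vs strip 0..383 (x-axis) from inputContainer[vfat][ROBstr]."""
    hDetectorMap = ROOT.TH2F(name, title, 384, -0.5, 383.5, 8, 0.5, 8.5)
    for vfat in range(24):
        ieta = chamber_vfatPos2iEta[vfat]
        iphi = vfat % 3  # assumed position of the VFAT within its ieta row
        for robstr in range(128):
            strip = 128 * iphi + robstr
            binX = hDetectorMap.GetXaxis().FindBin(strip)
            binY = hDetectorMap.GetYaxis().FindBin(ieta)
            hDetectorMap.SetBinContent(binX, binY, inputContainer[vfat][robstr])
    return hDetectorMap
```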
340,522 | 10,273,144,534 | IssuesEvent | 2019-08-23 18:26:24 | fgpv-vpgf/fgpv-vpgf | https://api.github.com/repos/fgpv-vpgf/fgpv-vpgf | closed | Grid number filters dont allow negative symbol | bug-type: broken use case priority: medium problem: bug type: corrective | A user cannot type the `-` symbol in a min/max filter box in the data grid. However you can compose a negative number in `notepad.exe` and then paste it into the filter box ok.
[This service](http://section917.cloudapp.net/arcgis/rest/services/TestData/Oilsands/MapServer/0) has negative numbers in the `longitude` field | 1.0 | Grid number filters dont allow negative symbol - A user cannot type the `-` symbol in a min/max filter box in the data grid. However you can compose a negative number in `notepad.exe` and then paste it into the filter box ok.
[This service](http://section917.cloudapp.net/arcgis/rest/services/TestData/Oilsands/MapServer/0) has negative numbers in the `longitude` field | priority | grid number filters dont allow negative symbol a user cannot type the symbol in a min max filter box in the data grid however you can compose a negative number in notepad exe and then paste it into the filter box ok has negative numbers in the longitude field | 1 |
54,289 | 3,062,998,462 | IssuesEvent | 2015-08-17 02:16:01 | Miniand/brdg.me-issues | https://api.github.com/repos/Miniand/brdg.me-issues | closed | Simplify command backend | priority:medium project:server type:enhancement | It's currently too complex and poorly abstracted, which can complicate commands sitting on top of state machines (limiting the ability to support things like mid-game votes.) Things should also be cleaned up to support more strongly defined parsers which will hopefully pave the way for richer UIs for mobile / web.
Current plan:
- [x] Remove `CanCall` from `Command` interface
- [x] Remove `Parse` from `Command` interface
- [x] Add `Name` to command interface to simplify command matching
- [x] Make `Call` accept a plain `string` input for arguments which it can parse however the implementation deems fit
- [x] Add `player string` argument to `Commands` function of `Game` interface
- [x] Move call authorisation logic inside `Commands` function of `Game` interface
- [x] Remove `AvailableCommands` helper as it will no longer be required | 1.0 | Simplify command backend - It's currently too complex and poorly abstracted, which can complicate commands sitting on top of state machines (limiting the ability to support things like mid-game votes.) Things should also be cleaned up to support more strongly defined parsers which will hopefully pave the way for richer UIs for mobile / web.
Current plan:
- [x] Remove `CanCall` from `Command` interface
- [x] Remove `Parse` from `Command` interface
- [x] Add `Name` to command interface to simplify command matching
- [x] Make `Call` accept a plain `string` input for arguments which it can parse however the implementation deems fit
- [x] Add `player string` argument to `Commands` function of `Game` interface
- [x] Move call authorisation logic inside `Commands` function of `Game` interface
- [x] Remove `AvailableCommands` helper as it will no longer be required | priority | simplify command backend it s currently too complex and poorly abstracted which can complicate commands sitting on top of state machines limiting the ability to support things like mid game votes things should also be cleaned up to support more strongly defined parsers which will hopefully pave the way for richer uis for mobile web current plan remove cancall from command interface remove parse from command interface add name to command interface to simplify command matching make call accept a plain string input for arguments which it can parse however the implementation deems fit add player string argument to commands function of game interface move call authorisation logic inside commands function of game interface remove availablecommands helper as it will no longer be required | 1 |
636,370 | 20,598,476,032 | IssuesEvent | 2022-03-05 22:13:23 | mreishman/Log-Hog | https://api.github.com/repos/mreishman/Log-Hog | closed | Combine whats new and change log pages | enhancement Priority - 3 - Medium | - [x] Move whats new images into change log
- [x] change images into slideshow with click to go full screen | 1.0 | Combine whats new and change log pages - - [x] Move whats new images into change log
- [x] change images into slideshow with click to go full screen | priority | combine whats new and change log pages move whats new images into change log change images into slideshow with click to go full screen | 1 |
523,391 | 15,180,770,662 | IssuesEvent | 2021-02-15 01:11:43 | QuantEcon/lecture-python.myst | https://api.github.com/repos/QuantEcon/lecture-python.myst | closed | [lecture_comparison]math_size_in_headings | medium-priority | This is a minor issue that math expressions in headings is too small in ```MyST```. I wonder whether there is any solution for this.
The example is: (Left: ```RST```, Right: ```MyST```.)
![Screen Shot 2021-01-19 at 6 55 45 pm](https://user-images.githubusercontent.com/44494439/105004771-85b1b480-5a88-11eb-95ff-ad512e86809f.png)
cc: @mmcky
| 1.0 | [lecture_comparison]math_size_in_headings - This is a minor issue that math expressions in headings is too small in ```MyST```. I wonder whether there is any solution for this.
The example is: (Left: ```RST```, Right: ```MyST```.)
![Screen Shot 2021-01-19 at 6 55 45 pm](https://user-images.githubusercontent.com/44494439/105004771-85b1b480-5a88-11eb-95ff-ad512e86809f.png)
cc: @mmcky
| priority | math size in headings this is a minor issue that math expressions in headings is too small in myst i wonder whether there is any solution for this the example is left rst right myst cc mmcky | 1 |
720,475 | 24,794,231,594 | IssuesEvent | 2022-10-24 15:53:54 | OpenLiberty/liberty-tools-vscode | https://api.github.com/repos/OpenLiberty/liberty-tools-vscode | closed | Liberty: History action to save previous Start... choices | 3 medium priority GUI SVT-req | To avoid having to re-type the same custom parameters each run add a "History" option similar to VS Code for Maven to view past runs
<img width="989" alt="image" src="https://user-images.githubusercontent.com/26146482/186692064-e4191a2e-d107-4827-abd4-e79efce2738b.png">
<img width="878" alt="image" src="https://user-images.githubusercontent.com/26146482/189976999-17385f9a-92c2-4c82-83f6-d370db9a785c.png">
| 1.0 | Liberty: History action to save previous Start... choices - To avoid having to re-type the same custom parameters each run add a "History" option similar to VS Code for Maven to view past runs
<img width="989" alt="image" src="https://user-images.githubusercontent.com/26146482/186692064-e4191a2e-d107-4827-abd4-e79efce2738b.png">
<img width="878" alt="image" src="https://user-images.githubusercontent.com/26146482/189976999-17385f9a-92c2-4c82-83f6-d370db9a785c.png">
| priority | liberty history action to save previous start choices to avoid having to re type the same custom parameters each run add a history option similar to vs code for maven to view past runs img width alt image src img width alt image src | 1 |
91,884 | 3,863,516,844 | IssuesEvent | 2016-04-08 09:45:41 | iamxavier/elmah | https://api.github.com/repos/iamxavier/elmah | closed | SqlCompactErrorLog assumes that if the database file exists, it has the elmah tables | auto-migrated Component-Persistence Priority-Medium Type-Enhancement | ```
What steps will reproduce the problem?
1. Use elmah on a ASP.NET project that already uses SQL Server CE 4.0
2. Go to the elmah page
3. It fails.
What is the expected output? What do you see instead?
Expected: elmah page.
Instead: error.
What version of the product are you using? On what operating system?
elmah 1.2 beta
Please provide any additional information below.
The SqlCompactErrorLog.InitializeDatabase() method assumes (twice) that if the
database file exists, it must already have the elmah tables. This is not always
the case. So, the simplest alternative could be to verify that the table really
exists (once you verify that the database file exists), instead of assuming
that it does.
```
Original issue reported on code.google.com by `[email protected]` on 28 Apr 2011 at 8:15 | 1.0 | SqlCompactErrorLog assumes that if the database file exists, it has the elmah tables - ```
What steps will reproduce the problem?
1. Use elmah on a ASP.NET project that already uses SQL Server CE 4.0
2. Go to the elmah page
3. It fails.
What is the expected output? What do you see instead?
Expected: elmah page.
Instead: error.
What version of the product are you using? On what operating system?
elmah 1.2 beta
Please provide any additional information below.
The SqlCompactErrorLog.InitializeDatabase() method assumes (twice) that if the
database file exists, it must already have the elmah tables. This is not always
the case. So, the simplest alternative could be to verify that the table really
exists (once you verify that the database file exists), instead of assuming
that it does.
```
Original issue reported on code.google.com by `[email protected]` on 28 Apr 2011 at 8:15 | priority | sqlcompacterrorlog assumes that if the database file exists it has the elmah tables what steps will reproduce the problem use elmah on a asp net project that already uses sql server ce go to the elmah page it fails what is the expected output what do you see instead expected elmah page instead error what version of the product are you using on what operating system elmah beta please provide any additional information below the sqlcompacterrorlog initializedatabase method assumes twice that if the database file exists it must already have the elmah tables this is not always the case so the simplest alternative could be to verify that the table really exists once you verify that the database file exists instead of assuming that it does original issue reported on code google com by je garzazambrano net on apr at | 1 |
249,876 | 7,964,907,005 | IssuesEvent | 2018-07-14 00:31:22 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | USER ISSUE: Takes minutes to access chests or anything else with items in it | Medium Priority Optimization | **Version:** 0.7.2.5 beta
**Steps to Reproduce:**
Just click any chest e.g.
**Expected behavior:**
It should appear directly
**Actual behavior:**
Im waiting 10 seconds at least | 1.0 | USER ISSUE: Takes minutes to access chests or anything else with items in it - **Version:** 0.7.2.5 beta
**Steps to Reproduce:**
Just click any chest e.g.
**Expected behavior:**
It should appear directly
**Actual behavior:**
Im waiting 10 seconds at least | priority | user issue takes minutes to access chests or anything else with items in it version beta steps to reproduce just click any chest e g expected behavior it should appear directly actual behavior im waiting seconds at least | 1 |
55,949 | 3,075,570,032 | IssuesEvent | 2015-08-20 14:19:39 | RobotiumTech/robotium | https://api.github.com/repos/RobotiumTech/robotium | closed | Can't click on contextual actionbar overflow menu item | bug imported invalid Priority-Medium | _From [[email protected]](https://code.google.com/u/118006805662722538912/) on March 30, 2014 04:02:08_
1.I want to select "Delete" or any action from contextual actionbar overflow menu in my app to test.
2.I tried to use many functions:
Solo.ClickOnMenuItem("Delete");
Solo.ClickOnText("Delete");
And sure I tried ClickOnView but all are not working.
What Shall I do ?!! What version of the product are you using? On what operating system? Android 4.0.3 Please provide any additional information below.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=593_ | 1.0 | Can't click on contextual actionbar overflow menu item - _From [[email protected]](https://code.google.com/u/118006805662722538912/) on March 30, 2014 04:02:08_
1.I want to select "Delete" or any action from contextual actionbar overflow menu in my app to test.
2.I tried to use many functions:
Solo.ClickOnMenuItem("Delete");
Solo.ClickOnText("Delete");
And sure I tried ClickOnView but all are not working.
What Shall I do ?!! What version of the product are you using? On what operating system? Android 4.0.3 Please provide any additional information below.
_Original issue: http://code.google.com/p/robotium/issues/detail?id=593_ | priority | can t click on contextual actionbar overflow menu item from on march i want to select delete or any action from contextual actionbar overflow menu in my app to test i tried to use many functions solo clickonmenuitem delete solo clickontext delete and sure i tried clickonview but all are not working what shall i do what version of the product are you using on what operating system android please provide any additional information below original issue | 1 |
711,797 | 24,475,615,417 | IssuesEvent | 2022-10-08 05:48:11 | returntocorp/semgrep | https://api.github.com/repos/returntocorp/semgrep | closed | Permission denied when using semgrep docker image in pre-commit | priority:medium devops docker | ```
[pad@thinkstation semgrep (precommit_jsonnet)]$ pre-commit run --verbose --hook-stage manual semgrep-docker-develop --all
Semgrep Develop Python...................................................Failed
- hook id: semgrep-docker-develop
- duration: 10.98s
- exit code: 1
======= DEPRECATION WARNING =======
The returntocorp/semgrep Docker image's custom entrypoint will be removed by June 2022.
Please update your command to explicitly call semgrep.
Change from: docker run -v $(pwd):/src returntocorp/semgrep --config p/python --error cli/src/semgrep/formatter/base.py cli/src/semgrep/core_output.py cli/src/semgrep/verbose_logging.py cli/src/semgrep/target_manager.py cli/src/semgrep/bytesize.py perf/corpus.py
Change to: docker run -v $(pwd):/src returntocorp/semgrep semgrep --config p/python --error cli/src/semgrep/formatter/base.py cli/src/semgrep/core_output.py cli/src/semgrep/verbose_logging.py cli/src/semgrep/target_manager.py cli/src/semgrep/bytesize.py perf/corpus.py
Traceback (most recent call last):
File "/usr/local/bin/semgrep", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/site-packages/semgrep/__main__.py", line 8, in main
cli()
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1654, in invoke
super().invoke(ctx)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/semgrep/cli.py", line 58, in cli
state = get_state()
File "/usr/local/lib/python3.10/site-packages/semgrep/state.py", line 40, in get_state
return ctx.ensure_object(SemgrepState)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 638, in ensure_object
self.obj = rv = object_type()
File "<attrs generated init semgrep.state.SemgrepState>", line 26, in __init__
File "<attrs generated init semgrep.terminal.Terminal>", line 5, in __init__
File "/usr/local/lib/python3.10/site-packages/semgrep/terminal.py", line 24, in __attrs_post_init__
self.configure()
File "/usr/local/lib/python3.10/site-packages/semgrep/terminal.py", line 62, in configure
env.user_log_file.parent.mkdir(parents=True, exist_ok=True)
File "/usr/local/lib/python3.10/pathlib.py", line 1175, in mkdir
self._accessor.mkdir(self, mode)
PermissionError: [Errno 13] Permission denied: '/.semgrep'
```
| 1.0 | Permission denied when using semgrep docker image in pre-commit - ```
[pad@thinkstation semgrep (precommit_jsonnet)]$ pre-commit run --verbose --hook-stage manual semgrep-docker-develop --all
Semgrep Develop Python...................................................Failed
- hook id: semgrep-docker-develop
- duration: 10.98s
- exit code: 1
======= DEPRECATION WARNING =======
The returntocorp/semgrep Docker image's custom entrypoint will be removed by June 2022.
Please update your command to explicitly call semgrep.
Change from: docker run -v $(pwd):/src returntocorp/semgrep --config p/python --error cli/src/semgrep/formatter/base.py cli/src/semgrep/core_output.py cli/src/semgrep/verbose_logging.py cli/src/semgrep/target_manager.py cli/src/semgrep/bytesize.py perf/corpus.py
Change to: docker run -v $(pwd):/src returntocorp/semgrep semgrep --config p/python --error cli/src/semgrep/formatter/base.py cli/src/semgrep/core_output.py cli/src/semgrep/verbose_logging.py cli/src/semgrep/target_manager.py cli/src/semgrep/bytesize.py perf/corpus.py
Traceback (most recent call last):
File "/usr/local/bin/semgrep", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/site-packages/semgrep/__main__.py", line 8, in main
cli()
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1055, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1654, in invoke
super().invoke(ctx)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 760, in invoke
return __callback(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/semgrep/cli.py", line 58, in cli
state = get_state()
File "/usr/local/lib/python3.10/site-packages/semgrep/state.py", line 40, in get_state
return ctx.ensure_object(SemgrepState)
File "/usr/local/lib/python3.10/site-packages/click/core.py", line 638, in ensure_object
self.obj = rv = object_type()
File "<attrs generated init semgrep.state.SemgrepState>", line 26, in __init__
File "<attrs generated init semgrep.terminal.Terminal>", line 5, in __init__
File "/usr/local/lib/python3.10/site-packages/semgrep/terminal.py", line 24, in __attrs_post_init__
self.configure()
File "/usr/local/lib/python3.10/site-packages/semgrep/terminal.py", line 62, in configure
env.user_log_file.parent.mkdir(parents=True, exist_ok=True)
File "/usr/local/lib/python3.10/pathlib.py", line 1175, in mkdir
self._accessor.mkdir(self, mode)
PermissionError: [Errno 13] Permission denied: '/.semgrep'
```
| priority | permission denied when using semgrep docker image in pre commit pre commit run verbose hook stage manual semgrep docker develop all semgrep develop python failed hook id semgrep docker develop duration exit code deprecation warning the returntocorp semgrep docker image s custom entrypoint will be removed by june please update your command to explicitly call semgrep change from docker run v pwd src returntocorp semgrep config p python error cli src semgrep formatter base py cli src semgrep core output py cli src semgrep verbose logging py cli src semgrep target manager py cli src semgrep bytesize py perf corpus py change to docker run v pwd src returntocorp semgrep semgrep config p python error cli src semgrep formatter base py cli src semgrep core output py cli src semgrep verbose logging py cli src semgrep target manager py cli src semgrep bytesize py perf corpus py traceback most recent call last file usr local bin semgrep line in sys exit main file usr local lib site packages semgrep main py line in main cli file usr local lib site packages click core py line in call return self main args kwargs file usr local lib site packages click core py line in main rv self invoke ctx file usr local lib site packages click core py line in invoke super invoke ctx file usr local lib site packages click core py line in invoke return ctx invoke self callback ctx params file usr local lib site packages click core py line in invoke return callback args kwargs file usr local lib site packages click decorators py line in new func return f get current context args kwargs file usr local lib site packages semgrep cli py line in cli state get state file usr local lib site packages semgrep state py line in get state return ctx ensure object semgrepstate file usr local lib site packages click core py line in ensure object self obj rv object type file line in init file line in init file usr local lib site packages semgrep terminal py line in attrs post init self configure file usr local lib site packages semgrep terminal py line in configure env user log file parent mkdir parents true exist ok true file usr local lib pathlib py line in mkdir self accessor mkdir self mode permissionerror permission denied semgrep | 1 |
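A note on the traceback above: the failure is mkdir('/.semgrep'), which looks like a per-user directory resolved against a missing or root HOME inside the container. The sketch below is only an approximation of that failure mode — semgrep's actual path logic lives in its env module and is not shown here — plus the kind of workaround (a writable HOME) that the docker invocation could pass.
```python
import os
from pathlib import Path


def user_log_dir(env: dict) -> Path:
    """Hypothetical approximation of the failing path computation.

    When HOME is unset (or is "/") inside the container, a per-user settings
    directory derived from it collapses to "/.semgrep", which an unprivileged
    container user cannot create -- matching the PermissionError above.
    """
    home = env.get("HOME", "/")
    return Path(home) / ".semgrep"


assert user_log_dir({}) == Path("/.semgrep")
assert user_log_dir({"HOME": "/tmp"}) == Path("/tmp/.semgrep")

# One workaround along these lines: give the hook a writable HOME, e.g.
#   docker run -e HOME=/tmp -v "$(pwd):/src" returntocorp/semgrep semgrep --config p/python /src
os.environ.setdefault("HOME", "/tmp")
```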
639,333 | 20,751,488,701 | IssuesEvent | 2022-03-15 08:05:13 | bounswe/bounswe2022group4 | https://api.github.com/repos/bounswe/bounswe2022group4 | closed | Listing security requirements for system requirements | Priority - Medium Status: Review Needed Status: In Progress Difficulty - Easy | Security requirements for the system will be listed. | 1.0 | Listing security requirements for system requirements - Security requirements for the system will be listed. | priority | listing security requirements for system requirements security requirements for the system will be listed | 1 |
485,236 | 13,963,002,984 | IssuesEvent | 2020-10-25 12:16:46 | AY2021S1-CS2103-T16-3/tp | https://api.github.com/repos/AY2021S1-CS2103-T16-3/tp | closed | Allow user to view rooms and students side by side | priority.Medium | This will make it easier for users to add room allocations. | 1.0 | Allow user to view rooms and students side by side - This will make it easier for users to add room allocations. | priority | allow user to view rooms and students side by side this will make it easier for users to add room allocations | 1 |
623,602 | 19,673,460,142 | IssuesEvent | 2022-01-11 09:53:19 | ossarioglu/SWE573-repo | https://api.github.com/repos/ossarioglu/SWE573-repo | closed | Add Unit Tests | learning priority_medium effort_size_medium coding | - Learn how to do unit tests on Python
- Add unit tests for all coding
- Check tests are properly working | 1.0 | Add Unit Tests - - Learn how to do unit tests on Python
- Add unit tests for all coding
- Check tests are properly working | priority | add unit tests learn how to do unit tests on python add unit tests for all coding check tests are properly working | 1 |
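Since the record above only lists "add unit tests" as a task, a minimal example of what that looks like with Python's standard `unittest` module may help; the `add` function and the test names are placeholders for illustration, not code from the repository in question.

```python
# Minimal unittest sketch: the function under test and the test case are
# hypothetical placeholders, not code from the SWE573 repository.
import unittest


def add(a: int, b: int) -> int:
    return a + b


class AddTests(unittest.TestCase):
    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)


if __name__ == "__main__":
    unittest.main()  # or run the whole suite with: python -m unittest
```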
242,250 | 7,840,043,080 | IssuesEvent | 2018-06-18 15:16:00 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | closed | Configurator cannot start services | Priority: Medium Type: Bug | According to an external source, the configurator on the latest devel doesn't seem to be started.
We would need to confirm this in the Inverse lab and correct this if needed.
@atran-inverse, can you build a new machine on the latest devel package and try it out | 1.0 | Configurator cannot start services - According to an external source, the configurator on the latest devel doesn't seem to be started.
We would need to confirm this in the Inverse lab and correct this if needed.
@atran-inverse, can you build a new machine on the latest devel package and try it out | priority | configurator cannot start services according to an external source the configurator on the latest devel doesn t seem to be started we would need to confirm this in the inverse lab and correct this if needed atran inverse can you build a new machine on the latest devel package and try it out | 1 |
536,108 | 15,704,225,656 | IssuesEvent | 2021-03-26 14:46:02 | HabitRPG/habitica-ios | https://api.github.com/repos/HabitRPG/habitica-ios | opened | Seasonal Event improvements | Priority: medium Type: Enhancement | - [ ] Add in clock icon to the corner of seasonal items that are only available for a limited time
- [ ] Add in banner to the item modal that says when the seasonal item will go away
- [ ] Add UI to the Seasonal Shop menu item that says when the shop is open and for how long
- [ ] Add UI to the Market menu item that says when seasonal potions are available
- [ ] For modal banner and menu time, when within 24h of event being over, convert to count down ie '16h 8m' | 1.0 | Seasonal Event improvements - - [ ] Add in clock icon to the corner of seasonal items that are only available for a limited time
- [ ] Add in banner to the item modal that says when the seasonal item will go away
- [ ] Add UI to the Seasonal Shop menu item that says when the shop is open and for how long
- [ ] Add UI to the Market menu item that says when seasonal potions are available
- [ ] For modal banner and menu time, when within 24h of event being over, convert to count down ie '16h 8m' | priority | seasonal event improvements add in clock icon to the corner of seasonal items that are only available for a limited time add in banner to the item modal that says when the seasonal item will go away add ui to the seasonal shop menu item that says when the shop is open and for how long add ui to the market menu item that says when seasonal potions are available for modal banner and menu time when within of event being over convert to count down ie | 1 |
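The last checklist item above asks for a "16h 8m"-style countdown once an event is within 24 hours of ending. The app itself is Swift, so the Python sketch below is only a language-neutral illustration of the intended formatting behaviour, not code from the Habitica iOS project.

```python
# Illustrative countdown formatting: show "Xh Ym" only when the event ends
# within the next 24 hours, otherwise fall back to a plain end-date label.
from datetime import datetime, timedelta
from typing import Optional


def countdown_label(ends_at: datetime, now: Optional[datetime] = None) -> str:
    now = now or datetime.utcnow()
    remaining = ends_at - now
    if timedelta(0) < remaining <= timedelta(hours=24):
        total_minutes = int(remaining.total_seconds()) // 60
        hours, minutes = divmod(total_minutes, 60)
        return f"{hours}h {minutes}m"
    return ends_at.strftime("Available until %b %d")


if __name__ == "__main__":
    end = datetime.utcnow() + timedelta(hours=16, minutes=8)
    print(countdown_label(end))  # prints something like "16h 7m"
```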
177,243 | 6,576,058,600 | IssuesEvent | 2017-09-11 18:22:05 | opencurrents/opencurrents | https://api.github.com/repos/opencurrents/opencurrents | opened | invite-volunteers: Add personal message to invite-volunteers | mvp priority medium | Personal message should be attached to admin, not the event. This way we can autopopulate the message when an admin creates another event. | 1.0 | invite-volunteers: Add personal message to invite-volunteers - Personal message should be attached to admin, not the event. This way we can autopopulate the message when an admin creates another event. | priority | invite volunteers add personal message to invite volunteers personal message should be attached to admin not the event this way we can autopopulate the message when an admin creates another event | 1 |
100,181 | 4,080,322,487 | IssuesEvent | 2016-05-31 01:05:27 | nim-lang/Nim | https://api.github.com/repos/nim-lang/Nim | closed | Slurp doesn't work with relative imports | Medium Priority Path Handling | Slurp opens files using paths relative to the module using ``slurp``. However this breaks when the module calling ``slurp`` is imported with the ``import path/to/module`` syntax. This can be tested with the following tree structure and files:
```nimrod
# outer.nim
import private/du
show()
# private/du.nim
const txt = slurp"du.nim"
proc show*() =
echo txt
when isMainModule:
show()
```
When ``outer.nim`` is compiled the relative import of ``du.nim`` is unable to find itself, it looks for the file in the directory of ``outer.nim`` instead of ``private/du.nim``.
While this can be solved for projects avoiding relative imports and using the compiler ``--path`` switch or through a config file, it is going to be a problem for [babel packages using relative private imports](https://github.com/nimrod-code/babel/blob/master/developers.markdown#libraries). | 1.0 | Slurp doesn't work with relative imports - Slurp opens files using paths relative to the module using ``slurp``. However this breaks when the module calling ``slurp`` is imported with the ``import path/to/module`` syntax. This can be tested with the following tree structure and files:
```nimrod
# outer.nim
import private/du
show()
# private/du.nim
const txt = slurp"du.nim"
proc show*() =
echo txt
when isMainModule:
show()
```
When ``outer.nim`` is compiled the relative import of ``du.nim`` is unable to find itself, it looks for the file in the directory of ``outer.nim`` instead of ``private/du.nim``.
While this can be solved for projects avoiding relative imports and using the compiler ``--path`` switch or through a config file, it is going to be a problem for [babel packages using relative private imports](https://github.com/nimrod-code/babel/blob/master/developers.markdown#libraries). | priority | slurp doesn t work with relative imports slurp opens files using paths relative to the module using slurp however this breaks when the module calling slurp is imported with the import path to module syntax this can be tested with the following tree structure and files nimrod outer nim import private du show private du nim const txt slurp du nim proc show echo txt when ismainmodule show when outer nim is compiled the relative import of du nim is unable to find itself it looks for the file in the directory of outer nim instead of private du nim while this can be solved for projects avoiding relative imports and using the compiler path switch or through a config file it is going to be a problem for | 1 |
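The same class of pitfall exists in any language that resolves data-file paths against the wrong base directory. Purely as a language-neutral analogy (Python rather than Nim, and unrelated to the compiler's own fix), the sketch below contrasts resolving a file against the caller's working directory with resolving it against the module's own location, which is the behaviour the report expects from ``slurp``.

```python
# Analogy only: a hypothetical module "private/du.py" reading a neighbouring
# data file "du.txt".
from pathlib import Path

# Fragile: resolved against whatever directory the *caller* runs from,
# roughly analogous to slurp resolving against the importing module.
fragile = Path("du.txt")

# Robust: resolved against the directory of this module itself.
robust = Path(__file__).resolve().parent / "du.txt"

print(fragile.resolve())
print(robust)
```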
241,676 | 7,822,049,524 | IssuesEvent | 2018-06-14 00:00:21 | tidepool-org/chrome-uploader | https://api.github.com/repos/tidepool-org/chrome-uploader | closed | Large basal schedules cause upload failure | bug: development priority: medium severity: major | ### Description
When a 600-series pump has a large basal schedule (tested with 1 per hour), pump download fails.
### Steps to Reproduce
1. Create a new basal schedule on the pump with a different basal rate every hour
1. Attempt to download data from pump
### Environment
**Uploader version**: 0.310.0-alpha.11.135.gb0298b2e
**OS**: macOS | 1.0 | Large basal schedules cause upload failure - ### Description
When a 600-series pump has a large basal schedule (tested with 1 per hour), pump download fails.
### Steps to Reproduce
1. Create a new basal schedule on the pump with a different basal rate every hour
1. Attempt to download data from pump
### Environment
**Uploader version**: 0.310.0-alpha.11.135.gb0298b2e
**OS**: macOS | priority | large basal schedules cause upload failure description when a series pump has a large basal schedule tested with per hour pump download fails steps to reproduce create a new basal schedule on the pump with a different basal rate every hour attempt to download data from pump environment uploader version alpha os macos | 1 |
175,561 | 6,552,277,584 | IssuesEvent | 2017-09-05 17:40:41 | washingtonstateuniversity/WSU-People-Directory | https://api.github.com/repos/washingtonstateuniversity/WSU-People-Directory | reopened | Introduce a base person object that provides consistent profile data | enhancement priority:medium | It should be possible to write `$person = get_wsu_person( 123 );` or similar and get back a `WSU_Person` object (or similar) and then reference stuff like:
```
First Name: <?php echo esc_html( $person->ad_first_name ); ?>
Last Name: <?php echo esc_html( $person->ad_last_name ); ?>
```
This can probably allow for easy extension by others in the future. It should also help us avoid rewriting similar code in many places to capture all of the various fields associated with a person.
I don't think this is time critical, but could come in handy soon. | 1.0 | Introduce a base person object that provides consistent profile data - It should be possible to write `$person = get_wsu_person( 123 );` or similar and get back a `WSU_Person` object (or similar) and then reference stuff like:
```
First Name: <?php echo esc_html( $person->ad_first_name ); ?>
Last Name: <?php echo esc_html( $person->ad_last_name ); ?>
```
This can probably allow for easy extension by others in the future. It should also help us avoid rewriting similar code in many places to capture all of the various fields associated with a person.
I don't think this is time critical, but could come in handy soon. | priority | introduce a base person object that provides consistent profile data it should be possible to write person get wsu person or similar and get back a wsu person object or similar and then reference stuff like first name ad first name last name ad last name this can probably allow for easy extension by others in the future it should also help us avoid rewriting similar code in many places to capture all of the various fields associated with a person i don t think this is time critical but could come in handy soon | 1 |
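The record above proposes a single person object so that templates stop re-collecting the same fields. The original is a WordPress/PHP plugin; purely as an illustration of the pattern (and not the plugin's actual API), a dataclass version of such a wrapper might look like the following, with all field values and the loader stubbed out.

```python
# Hypothetical sketch of a "base person object"; the field names
# ad_first_name/ad_last_name mirror the snippet in the issue, the rest is assumed.
from dataclasses import dataclass


@dataclass
class Person:
    person_id: int
    ad_first_name: str = ""
    ad_last_name: str = ""

    @property
    def display_name(self) -> str:
        return f"{self.ad_first_name} {self.ad_last_name}".strip()


def get_person(person_id: int) -> Person:
    # A real implementation would hydrate this from post meta / the directory API.
    return Person(person_id=person_id, ad_first_name="Ada", ad_last_name="Lovelace")


if __name__ == "__main__":
    person = get_person(123)
    print(person.display_name)
```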
696,626 | 23,908,365,242 | IssuesEvent | 2022-09-09 05:01:20 | Gilded-Games/The-Aether | https://api.github.com/repos/Gilded-Games/The-Aether | closed | Feature: JEI compatibility | priority/medium status/pending-review type/compatibility type/feature version/1.19 status/dependent | - [x] Altar and Freezer recipes, with support for displaying custom fuels.
- [x] Incubators should have support to display eggs that can be incubated. | 1.0 | Feature: JEI compatibility - - [x] Altar and Freezer recipes, with support for displaying custom fuels.
- [x] Incubators should have support to display eggs that can be incubated. | priority | feature jei compatibility altar and freezer recipes with support for displaying custom fuels incubators should have support to display eggs that can be incubated | 1 |
815,462 | 30,556,190,628 | IssuesEvent | 2023-07-20 11:52:42 | ubiquity/ubiquibot | https://api.github.com/repos/ubiquity/ubiquibot | closed | Query User Information (Wallet Address, Multiplier) | Time: <1 Hour Priority: 1 (Medium) Price: 37.5 USD | I have been doing some manual payouts lately and realized that sometimes it's nice to be able to quickly identify a user's `multiplier` and `wallet` address without superuser access to the global bot database!
We should have a new command to view a user's settings.
`/query @user`
Unless there is a better, more expressive command name than "query"
---
> I'll need to send some xDAI to both participants because this was my fault for not ensuring that the bounty is valid.
> https://gnosisscan.io/tx/0x3162608e9e18f71b4b29b0780904b664d160d0d79b0f761569f0d590b5f16044 @me505
> https://gnosisscan.io/tx/0xbb9e74ab8951f5cc45fb38dd1d56f7e04e3c0ba6db98bfbaf58bfe63043d950a @AnakinSkywalkeer
_Originally posted by @pavlovcik in https://github.com/ubiquity/ubiquity-dollar/issues/562#issuecomment-1630854865_
| 1.0 | Query User Information (Wallet Address, Multiplier) - I have been doing some manual payouts lately and realized that sometimes it's nice to be able to quickly identify a user's `multiplier` and `wallet` address without superuser access to the global bot database!
We should have a new command to view a user's settings.
`/query @user`
Unless there is a better, more expressive command name than "query"
---
> I'll need to send some xDAI to both participants because this was my fault for not ensuring that the bounty is valid.
> https://gnosisscan.io/tx/0x3162608e9e18f71b4b29b0780904b664d160d0d79b0f761569f0d590b5f16044 @me505
> https://gnosisscan.io/tx/0xbb9e74ab8951f5cc45fb38dd1d56f7e04e3c0ba6db98bfbaf58bfe63043d950a @AnakinSkywalkeer
_Originally posted by @pavlovcik in https://github.com/ubiquity/ubiquity-dollar/issues/562#issuecomment-1630854865_
| priority | query user information wallet address multiplier i have been doing some manual payouts lately and realized that sometimes it s nice to be able to quickly identify a user s multiplier and wallet address without superuser access to the global bot database we should have a new command to view a user s settings query user unless there is a better more expressive command name than query i ll need to send some xdai to both participants because this was my fault for not ensuring that the bounty is valid anakinskywalkeer originally posted by pavlovcik in | 1 |
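The bot in the record above is not written in Python, so the following is only a hypothetical sketch of what parsing a `/query @user` comment and looking up stored settings could look like; the settings store, field names, and example values are entirely assumed.

```python
# Hypothetical /query handler sketch; the settings store and its fields are
# assumptions, not the bot's actual schema.
import re

USER_SETTINGS = {
    "example-user": {
        "wallet": "0x0000000000000000000000000000000000000000",
        "multiplier": 1.5,
    },
}

QUERY_RE = re.compile(r"^/query\s+@(?P<user>[A-Za-z0-9-]+)\s*$")


def handle_comment(comment: str) -> str:
    match = QUERY_RE.match(comment.strip())
    if not match:
        return "Usage: /query @user"
    user = match.group("user")
    settings = USER_SETTINGS.get(user)
    if settings is None:
        return f"No settings stored for @{user}"
    return f"@{user}: wallet={settings['wallet']} multiplier={settings['multiplier']}"


if __name__ == "__main__":
    print(handle_comment("/query @example-user"))
```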
421,394 | 12,256,550,090 | IssuesEvent | 2020-05-06 12:17:47 | HabitRPG/habitica | https://api.github.com/repos/HabitRPG/habitica | closed | Reset should unassign Stat Points (previously known as Attribute Points) | good first issue priority: medium section: Settings status: issue: in progress | When you use [Reset Account](https://habitica.fandom.com/wiki/Settings#Reset_Account) (from User Icon > Settings > "Danger Zone" section), it correctly sets your Level to 1 but it does not reset your Stat Points. It should reset them as described below.
Ideally we'd fix this with the generic function described at https://github.com/HabitRPG/habitica/issues/6482 but that requires a large body of work and no one has taken it up yet. Doing a fix for Reset alone will be much simpler and will bring immediate benefits (e.g., for new players who were experimenting with Habitica and want a fresh start).
I'm marking this as "good first issue" although it's larger than we'd normally use that label for. The information below should help a new contributor though and you can comment here if you have any questions.
Stat Points are stored in a user's account like this:
```
{
"stats": {
"lvl": 14,
"points": 2,
"str": 3,
"con": 3,
"int": 3,
"per": 3
}
}
```
That's a user at level 14. They have 3 points allocated to each of the Stats (STRength, CONstitution, INTelligence, PERception), and they have 2 points unassigned.
After using Reset currently, the player's account will look like this, which is wrong:
```
{
"stats": {
"lvl": 1,
"points": 2,
"str": 3,
"con": 3,
"int": 3,
"per": 3
}
}
```
It **should** look like this:
```
{
"stats": {
"lvl": 1,
"points": 1,
"str": 0,
"con": 0,
"int": 0,
"per": 0
}
}
```
I.e., `stats.points` should be the same as `stats.lvl` (1, the user's level) and the individual Stats should be set to 0.
This is where Reset sets the level to 1 so it would be a good place to reset the Stat Points:
https://github.com/HabitRPG/habitica/blob/d03e5e93b09f5a16598ed63b911c074d28c62368/website/common/script/ops/reset.js#L4-L9
We would want tests for this too. Below is the test for how Reset changes the level to 1. Tests for setting `stats.points` to 1 and the individual stats to 0 should be in the same file and can use similar code.
https://github.com/HabitRPG/habitica/blob/d03e5e93b09f5a16598ed63b911c074d28c62368/test/common/ops/reset.js#L47-L53
| 1.0 | Reset should unassign Stat Points (previously known as Attribute Points) - When you use [Reset Account](https://habitica.fandom.com/wiki/Settings#Reset_Account) (from User Icon > Settings > "Danger Zone" section), it correctly sets your Level to 1 but it does not reset your Stat Points. It should reset them as described below.
Ideally we'd fix this with the generic function described at https://github.com/HabitRPG/habitica/issues/6482 but that requires a large body of work and no one has taken it up yet. Doing a fix for Reset alone will be much simpler and will bring immediate benefits (e.g., for new players who were experimenting with Habitica and want a fresh start).
I'm marking this as "good first issue" although it's larger than we'd normally use that label for. The information below should help a new contributor though and you can comment here if you have any questions.
Stat Points are stored in a user's account like this:
```
{
"stats": {
"lvl": 14,
"points": 2,
"str": 3,
"con": 3,
"int": 3,
"per": 3
}
}
```
That's a user at level 14. They have 3 points allocated to each of the Stats (STRength, CONstitution, INTelligence, PERception), and they have 2 points unassigned.
After using Reset currently, the player's account will look like this, which is wrong:
```
{
"stats": {
"lvl": 1,
"points": 2,
"str": 3,
"con": 3,
"int": 3,
"per": 3
}
}
```
It **should** look like this:
```
{
"stats": {
"lvl": 1,
"points": 1,
"str": 0,
"con": 0,
"int": 0,
"per": 0
}
}
```
I.e., `stats.points` should be the same as `stats.lvl` (1, the user's level) and the individual Stats should be set to 0.
This is where Reset sets the level to 1 so it would be a good place to reset the Stat Points:
https://github.com/HabitRPG/habitica/blob/d03e5e93b09f5a16598ed63b911c074d28c62368/website/common/script/ops/reset.js#L4-L9
We would want tests for this too. Below is the test for how Reset changes the level to 1. Tests for setting `stats.points` to 1 and the individual stats to 0 should be in the same file and can use similar code.
https://github.com/HabitRPG/habitica/blob/d03e5e93b09f5a16598ed63b911c074d28c62368/test/common/ops/reset.js#L47-L53
| priority | reset should unassign stat points previously known as attribute points when you use from user icon settings danger zone section it correctly sets your level to but it does not reset your stat points it should reset them as described below ideally we d fix this with the generic function described at but that requires a large body of work and no one has taken it up yet doing a fix for reset alone will be much simpler and will bring immediate benefits e g for new players who were experimenting with habitica and want a fresh start i m marking this as good first issue although it s larger than we d normally use that label for the information below should help a new contributor though and you can comment here if you have any questions stat points are stored in a user s account like this stats lvl points str con int per that s a user at level they have points allocated to each of the stats strength constitution intelligence perception and they have points unassigned after using reset currently the player s account will look like this which is wrong stats lvl points str con int per it should look like this stats lvl points str con int per i e stats points should the same as stats lvl the user s level and the individual stats should be set to this is where reset sets the level to so it would be a good place to reset the stat points we would want tests for this too below is the test for how reset changes the level to tests for setting stats points to and the individual stats to should be in the same file and can use similar code | 1 |
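The actual fix belongs in `website/common/script/ops/reset.js` (JavaScript), but the data transformation the record describes stands on its own: set the level and the unassigned points to 1 and zero out the four attributes. The Python sketch below only mirrors the before/after JSON shown above and is not Habitica code.

```python
# Illustration of the stats reset described above (not the Habitica codebase).
def reset_stats(stats: dict) -> dict:
    reset = dict(stats)
    reset["lvl"] = 1
    reset["points"] = 1  # equal to the new level
    for attr in ("str", "con", "int", "per"):
        reset[attr] = 0
    return reset


if __name__ == "__main__":
    before = {"lvl": 14, "points": 2, "str": 3, "con": 3, "int": 3, "per": 3}
    print(reset_stats(before))
    # -> {'lvl': 1, 'points': 1, 'str': 0, 'con': 0, 'int': 0, 'per': 0}
```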
413,897 | 12,093,280,750 | IssuesEvent | 2020-04-19 18:59:41 | busy-beaver-dev/busy-beaver | https://api.github.com/repos/busy-beaver-dev/busy-beaver | closed | DevOps Best Practices | effort high epic priority medium | - [x] Infrastructure as Code with Terraform; automated via https://www.runatlantis.io/
- [x] **Staging environment** to confirm that changes work as expected. [Best Practices of Staging](https://increment.com/development/center-stage-best-practices-for-staging-environments/) | 1.0 | DevOps Best Practices - - [x] Infrastructure as Code with Terraform; automated via https://www.runatlantis.io/
- [x] **Staging environment** to confirm that changes work as expected. [Best Practices of Staging](https://increment.com/development/center-stage-best-practices-for-staging-environments/) | priority | devops best practices infrastructure as code with terraform automated via staging environment to confirm that changes work as expected | 1 |
272,529 | 8,514,522,220 | IssuesEvent | 2018-10-31 18:48:18 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | Callback function le_param_req() is never called. | area: Bluetooth bug nRF priority: medium | The simplest example is samples\bluetooth\peripheral_dis.
Board nrf51_pca10028.
Minor changes to the main.c file.
Functions le_param_req() and le_param_updated() added to conn_callbacks{} structure.
```
static bool le_param_req(struct bt_conn *conn, ////
struct bt_le_conn_param *param) //
{ //
printk("le_param_req\n"); //
return true; //
} // added lines
//
static void le_param_updated(struct bt_conn *conn, u16_t interval, //
u16_t latency, u16_t timeout) //
{ //
printk("le_param_updated\n"); //
} ////
static struct bt_conn_cb conn_callbacks = {
.connected = connected,
.disconnected = disconnected,
.le_param_req = le_param_req, // added line
.le_param_updated = le_param_updated, // added line
};
```
For the test, nRF connect PC is used. Three simple steps:
1. Connection
2. Change connection parameter
3. Disconnection
Log:
```
***** Booting Zephyr OS 1.13.99 *****
Bluetooth initialized
Advertising successfully started
Connected
le_param_updated
le_param_updated
Disconnected (reason 19)
```
It can be seen that the parameters have changed; there is a call to le_param_updated(), but no call to le_param_req().
 | 1.0 | Callback function le_param_req() is never called. - The simplest example is samples\bluetooth\peripheral_dis.
Board nrf51_pca10028.
Minor changes to the main.c file.
Functions le_param_req() and le_param_updated() added to conn_callbacks{} structure.
```
static bool le_param_req(struct bt_conn *conn, ////
struct bt_le_conn_param *param) //
{ //
printk("le_param_req\n"); //
return true; //
} // added lines
//
static void le_param_updated(struct bt_conn *conn, u16_t interval, //
u16_t latency, u16_t timeout) //
{ //
printk("le_param_updated\n"); //
} ////
static struct bt_conn_cb conn_callbacks = {
.connected = connected,
.disconnected = disconnected,
.le_param_req = le_param_req, // added line
.le_param_updated = le_param_updated, // added line
};
```
For the test, nRF connect PC is used. Three simple steps:
1. Connection
2. Change connection parameter
3. Disconnection
Log:
```
***** Booting Zephyr OS 1.13.99 *****
Bluetooth initialized
Advertising successfully started
Connected
le_param_updated
le_param_updated
Disconnected (reason 19)
```
It can be seen that the parameters have changed; there is a call to le_param_updated(), but no call to le_param_req().
| priority | callback function le param req is never called a simplest example is samples bluetooth peripheral dis board minor changes to the main c file functions le param req and le param updated added to conn callbacks structure static bool le param req struct bt conn conn struct bt le conn param param printk le param req n return true added lines static void le param updated struct bt conn conn t interval t latency t timeout printk le param updated n static struct bt conn cb conn callbacks connected connected disconnected disconnected le param req le param req added line le param updated le param updated added line for the test nrf connect pc is used three simple steps connection change connection parameter disconnection log booting zephyr os bluetooth initialized advertising successfully started connected le param updated le param updated disconnected reason it can be seen that the parameters have changed there is call le param updated but there is no call le param req | 1 |