Unnamed: 0 (int64, 0–832k) | id (float64, 2.49B–32.1B) | type (stringclasses, 1 value) | created_at (stringlengths 19) | repo (stringlengths 5–112) | repo_url (stringlengths 34–141) | action (stringclasses, 3 values) | title (stringlengths 1–957) | labels (stringlengths 4–795) | body (stringlengths 1–259k) | index (stringclasses, 12 values) | text_combine (stringlengths 96–259k) | label (stringclasses, 2 values) | text (stringlengths 96–252k) | binary_label (int64, 0–1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1,094 | 2,507,645,323 | IssuesEvent | 2015-01-12 19:46:46 | ubc/acj-versus | https://api.github.com/repos/ubc/acj-versus | opened | Fix evaluation period fields' date formatting | medium priority | When the following sequence of events happen on the question creation/edit form, the date formatting is broken:
- check the set evaluation period
- uncheck the evaluation period
- check again | 1.0 | Fix evaluation period fields' date formatting - When the following sequence of events happen on the question creation/edit form, the date formatting is broken:
- check the set evaluation period
- uncheck the evaluation period
- check again | priority | fix evaluation period fields date formatting when the following sequence of events happen on the question creation edit form the date formatting is broken check the set evaluation period uncheck the evaluation period check again | 1 |
249,125 | 7,953,855,203 | IssuesEvent | 2018-07-12 04:18:46 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Lagging server | Medium Priority Optimization | I bought a VPS to start my server on so me and my friends could play without being abused be admins
but the problem is we all get lag, ping from 900 to 3000, I tried to edit some stuff in config in order to fix the problem but the lag still exist
VPS specs
Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50 GHz
2.00 GB RAM
64-bit windows server 2008 R2 Enterprise
internet speed above 200MB
I edited these settings in eco config: wirtetimeout: 3000 readtimeout: 3000
storage saving: 3600
compression level: best speed | 1.0 | Lagging server - I bought a VPS to start my server on so me and my friends could play without being abused be admins
but the problem is we all get lag, ping from 900 to 3000, I tried to edit some stuff in config in order to fix the problem but the lag still exist
VPS specs
Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50 GHz
2.00 GB RAM
64-bit windows server 2008 R2 Enterprise
internet speed above 200MB
I edited these settings in eco config: wirtetimeout: 3000 readtimeout: 3000
storage saving: 3600
compression level: best speed | priority | lagging server i bought a vps to start my server on so me and my friends could play without being abused be admins but the problem is we all get lag ping from to i tried to edit some stuff in config in order to fix the problem but the lag still exist vps specs intel r xeon r platinum cpu ghz gb ram bit windows server enterprise internet speed above i edited these settings in eco config wirtetimeout readtimeout storage saving compression level best speed | 1 |
89,219 | 3,790,673,575 | IssuesEvent | 2016-03-21 22:23:57 | Sonarr/Sonarr | https://api.github.com/repos/Sonarr/Sonarr | closed | Support for non-english metadata | enhancement priority:medium | Some series don't have English metadata on TheTVDB, so Sonarr needs to support getting metadata in other languages. | 1.0 | Support for non-english metadata - Some series don't have English metadata on TheTVDB, so Sonarr needs to support getting metadata in other languages. | priority | support for non english metadata some series don t have english metadata on thetvdb so sonarr needs to support getting metadata in other languages | 1 |
690,665 | 23,668,793,253 | IssuesEvent | 2022-08-27 02:57:43 | decline-cookies/anvil-unity-core | https://api.github.com/repos/decline-cookies/anvil-unity-core | closed | Support Higher Levels of AssemblyStripping | effort-high priority-medium status-in-review type-bug | Currently Anvil does not support higher levels of Assembly Stripping than `Low` (Unity 2021.1.0).
When the compiler is set to higher levels of stripping the constructors for some classes are stripped out of the build. Constructors are stripped because they do not have a hard reference anywhere. This is common for classes that are instantiated through `Activator.CreateInstance` to incorporate a specific implementation into a more generic system (Ex: Update sources for `Anvil.CSharp.DelayedExecution`.
Some issues will require fixes in [anvil-csharp-core](https://github.com/scratch-games/anvil-csharp-core). This is tracked via: https://github.com/scratch-games/anvil-csharp-core/issues/49
See the following links for more causes of stripping related issues and potential solutions.
- https://docs.unity3d.com/Manual/ScriptingRestrictions.html
- https://docs.unity3d.com/Manual/ManagedCodeStripping.html
- `Link.xml` was used to resolve some stripping issues as part of #46. It may not be suitable for all situations
# Examples
- All subclasses of `AbstractUnityUpdateSource`. For example, `UnityFixedUpdateSource`
- The simple fix is to mark all `AbstractUnityUpdateSource` subclasses with the [UnityEngine.Scripting.PreserveAttribute](https://docs.unity3d.com/2021.1/Documentation/ScriptReference/Scripting.PreserveAttribute.html) but this isn't very maintainable and keeps the classes even when they are not being used.
- TODO: Add more as they are encountered
# To Reproduce
1. Add anvil to a Unity project
2. Make use of one of the classes mentioned in the examples above
3. Set project settings to `Managed Stripping Level: High`
4. Export and run a build
5. Notice `ExecutionEngineException` thrown. | 1.0 | Support Higher Levels of AssemblyStripping - Currently Anvil does not support higher levels of Assembly Stripping than `Low` (Unity 2021.1.0).
When the compiler is set to higher levels of stripping the constructors for some classes are stripped out of the build. Constructors are stripped because they do not have a hard reference anywhere. This is common for classes that are instantiated through `Activator.CreateInstance` to incorporate a specific implementation into a more generic system (Ex: Update sources for `Anvil.CSharp.DelayedExecution`.
Some issues will require fixes in [anvil-csharp-core](https://github.com/scratch-games/anvil-csharp-core). This is tracked via: https://github.com/scratch-games/anvil-csharp-core/issues/49
See the following links for more causes of stripping related issues and potential solutions.
- https://docs.unity3d.com/Manual/ScriptingRestrictions.html
- https://docs.unity3d.com/Manual/ManagedCodeStripping.html
- `Link.xml` was used to resolve some stripping issues as part of #46. It may not be suitable for all situations
# Examples
- All subclasses of `AbstractUnityUpdateSource`. For example, `UnityFixedUpdateSource`
- The simple fix is to mark all `AbstractUnityUpdateSource` subclasses with the [UnityEngine.Scripting.PreserveAttribute](https://docs.unity3d.com/2021.1/Documentation/ScriptReference/Scripting.PreserveAttribute.html) but this isn't very maintainable and keeps the classes even when they are not being used.
- TODO: Add more as they are encountered
# To Reproduce
1. Add anvil to a Unity project
2. Make use of one of the classes mentioned in the examples above
3. Set project settings to `Managed Stripping Level: High`
4. Export and run a build
5. Notice `ExecutionEngineException` thrown. | priority | support higher levels of assemblystripping currently anvil does not support higher levels of assembly stripping than low unity when the compiler is set to higher levels of stripping the constructors for some classes are stripped out of the build constructors are stripped because they do not have a hard reference anywhere this is common for classes that are instantiated through activator createinstance to incorporate a specific implementation into a more generic system ex update sources for anvil csharp delayedexecution some issues will require fixes in this is tracked via see the following links for more causes of stripping related issues and potential solutions link xml was used to resolve some stripping issues as part of it may not be suitable for all situations examples all subclasses of abstractunityupdatesource for example unityfixedupdatesource the simple fix is to mark all abstractunityupdatesource subclasses with the but this isn t very maintainable and keeps the classes even when they are not being used todo add more as they are encountered to reproduce add anvil to a unity project make use of one of the classes mentioned in the examples above set project settings to managed stripping level high export and run a build notice executionengineexception thrown | 1 |
87,642 | 3,756,481,879 | IssuesEvent | 2016-03-13 11:12:26 | sbt-compiler-maven-plugin/sbt-compiler-maven-plugin | https://api.github.com/repos/sbt-compiler-maven-plugin/sbt-compiler-maven-plugin | closed | Reduce the number of SBT 0.13.x compatible compilers to one | Component-Compiler-API Component-Compiler-SBT0131 Component-Compiler-SBT0132 Component-Compiler-SBT0135 Component-Compiler-SBT0136 Component-Compiler-SBT0137 Component-Compiler-SBT0138 Component-Compiler-SBT0139 Priority-Medium Type-Task | Different `0.13.x` compatible compilers using different Zinc `0.3.x` versions does not differ much.
There were some differences in incremental compilation cache file in SBT `0.13.x` early versions, but latest versions look identical in configuration and functioning. I think the differences in latest SBT `0.13.x` versions are outside of the compiler component.
Having one, the latest `0.13.9`, compiler implementation will be enough. All previous `0.13.x` (`0.13.0` - `0.13.8`) compilers will be removed. `0.13.9` compiler will be selected for all `0.13.x` values of `sbtVersion` plugin configuration parameter (or `sbt.version` project property).
| 1.0 | Reduce the number of SBT 0.13.x compatible compilers to one - Different `0.13.x` compatible compilers using different Zinc `0.3.x` versions does not differ much.
There were some differences in incremental compilation cache file in SBT `0.13.x` early versions, but latest versions look identical in configuration and functioning. I think the differences in latest SBT `0.13.x` versions are outside of the compiler component.
Having one, the latest `0.13.9`, compiler implementation will be enough. All previous `0.13.x` (`0.13.0` - `0.13.8`) compilers will be removed. `0.13.9` compiler will be selected for all `0.13.x` values of `sbtVersion` plugin configuration parameter (or `sbt.version` project property).
| priority | reduce the number of sbt x compatible compilers to one different x compatible compilers using different zinc x versions does not differ much there were some differences in incremental compilation cache file in sbt x early versions but latest versions look identical in configuration and functioning i think the differences in latest sbt x versions are outside of the compiler component having one the latest compiler implementation will be enough all previous x compilers will be removed compiler will be selected for all x values of sbtversion plugin configuration parameter or sbt version project property | 1 |
114,114 | 4,614,436,104 | IssuesEvent | 2016-09-25 15:37:05 | CascadesCarnivoreProject/Carnassial | https://api.github.com/repos/CascadesCarnivoreProject/Carnassial | closed | Timelapse UX: crash on invalid ImageFilter value inserted by DataEntryChoice | Medium Priority fix | DataEntryChoice..ctor() has
// Add a separator and an empty field to the list, so the user can return it to an empty state. Note that this means ImageQuality can also be empty.. not sure if this is a good thing
this.ContentControl.Items.Add(new Separator());
this.ContentControl.Items.Add(String.Empty);
this.ContentControl.SelectedIndex = 0;
which produces an exception from Enum.Parse() if a user selects the empty item in the dropdown. | 1.0 | Timelapse UX: crash on invalid ImageFilter value inserted by DataEntryChoice - DataEntryChoice..ctor() has
// Add a separator and an empty field to the list, so the user can return it to an empty state. Note that this means ImageQuality can also be empty.. not sure if this is a good thing
this.ContentControl.Items.Add(new Separator());
this.ContentControl.Items.Add(String.Empty);
this.ContentControl.SelectedIndex = 0;
which produces an exception from Enum.Parse() if a user selects the empty item in the dropdown. | priority | timelapse ux crash on invalid imagefilter value inserted by dataentrychoice dataentrychoice ctor has add a separator and an empty field to the list so the user can return it to an empty state note that this means imagequality can also be empty not sure if this is a good thing this contentcontrol items add new separator this contentcontrol items add string empty this contentcontrol selectedindex which produces an exception from enum parse if a user selects the empty item in the dropdown | 1 |
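The crash in this entry boils down to feeding a value that is not a member of the enum (the empty "reset" item) into a strict parse. The project itself is C#/WPF, but the defensive pattern can be sketched in Python with hypothetical names (`ImageFilter` and its members here are illustrative, not the actual Carnassial definitions):

```python
from enum import Enum

class ImageFilter(Enum):
    OK = "Ok"
    DARK = "Dark"
    CORRUPTED = "Corrupted"

def parse_image_filter(value, default=ImageFilter.OK):
    # Treat the empty "reset to blank" dropdown item (and any other
    # unrecognized string) as the default instead of raising, which is
    # the behavior the strict Enum.Parse() call lacked.
    try:
        return ImageFilter(value)
    except ValueError:
        return default
```

A usage check: `parse_image_filter("")` returns the default rather than raising, while valid member values round-trip normally.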
132,000 | 5,167,702,896 | IssuesEvent | 2017-01-17 19:31:53 | washingtonstateuniversity/WSU-People-Display | https://api.github.com/repos/washingtonstateuniversity/WSU-People-Display | opened | Provide the people REST URL in a single place | enhancement priority:medium | In the current code, `https://people.wsu.edu/wp-json/wp/v2/people/` is provided in a few different places. We should store this once, probably in the base class, and make it filterable when the plugin first initializes. | 1.0 | Provide the people REST URL in a single place - In the current code, `https://people.wsu.edu/wp-json/wp/v2/people/` is provided in a few different places. We should store this once, probably in the base class, and make it filterable when the plugin first initializes. | priority | provide the people rest url in a single place in the current code is provided in a few different places we should store this once probably in the base class and make it filterable when the plugin first initializes | 1 |
778,447 | 27,316,881,206 | IssuesEvent | 2023-02-24 16:22:22 | Rothamsted-Ecoinformatics/farm_rothamsted | https://api.github.com/repos/Rothamsted-Ecoinformatics/farm_rothamsted | closed | Visualisation: Hide quantities from Logs view | Priority 2: Should do (after must do's) Effort: medium Visualisation user experience/ HCD | When looking at the logs assigned to an asset, one of the farm team asked if we could please hide the quantities field from the list of displayed fields. The rationale behind this is that, because there are lots of quantities associated with the logs (especially for the logs created via the Spraying quick forms, see below) you can only see 1-2 logs on the screen at a time. Bruce thinks it would be preferable to see a longer list of 10-15 logs, which you can then click on to see the details of what was applied and in what quantities
![image](https://user-images.githubusercontent.com/16349781/167622571-a99a63ad-b347-4e41-a12b-c7bf4fec4718.png)
| 1.0 | Visualisation: Hide quantities from Logs view - When looking at the logs assigned to an asset, one of the farm team asked if we could please hide the quantities field from the list of displayed fields. The rationale behind this is that, because there are lots of quantities associated with the logs (especially for the logs created via the Spraying quick forms, see below) you can only see 1-2 logs on the screen at a time. Bruce thinks it would be preferable to see a longer list of 10-15 logs, which you can then click on to see the details of what was applied and in what quantities
![image](https://user-images.githubusercontent.com/16349781/167622571-a99a63ad-b347-4e41-a12b-c7bf4fec4718.png)
| priority | visualisation hide quantities from logs view when looking at the logs assigned to an asset one of the farm team asked if we could please hide the quantities field from the list of displayed fields the rationale behind this is that because there are lots of quantities associated with the logs especially for the logs created via the spraying quick forms see below you can only see logs on the screen at a time bruce thinks it would be preferable to see a longer list of logs which you can then click on to see the details of what was applied and in what quantities | 1 |
745,361 | 25,980,987,793 | IssuesEvent | 2022-12-19 18:49:13 | group2-cs321/AMS-COLBY | https://api.github.com/repos/group2-cs321/AMS-COLBY | closed | As a superadmin I want to have the option to create multiple users and teams by uploading a CSV file | Medium priority | Should take around 3hs | 1.0 | As a superadmin I want to have the option to create multiple users and teams by uploading a CSV file - Should take around 3hs | priority | as a superadmin i want to have the option to create multiple users and teams by uploading a csv file should take around | 1 |
567,385 | 16,857,481,120 | IssuesEvent | 2021-06-21 08:43:09 | dmwm/WMCore | https://api.github.com/repos/dmwm/WMCore | closed | Change our MySQL driver to stop returning data in the long data type | Feature change Medium Priority WMAgent | **Impact of the new feature**
WMAgent (and T0, but they don't use MySQL)
**Is your feature request related to a problem? Please describe.**
Yes, it's been discussed in this code review comment:
https://github.com/dmwm/WMCore/pull/10605/files#r654225846
in short, `hash()` is no longer reproducible between python2 and python3. So we are investigating the adoption of `hashlib.sha1` (or another algorithm in there) to have reproducible behaviour. One of the issues that we find there is that our python data structure sometimes contain long data type (returned from MariaDB), causing the resulting hash to be different (thus affecting object comparison).
Python2 max integer is very large:
```
(PY2) alannewmacpro:cmsdist amaltaro$ python -c 'import sys; print(sys.maxint)'
9223372036854775807
```
and AFAIK only small enough numbers are currently fetched from MariaDB (run/lumi/event number), maybe some other file/fileset/subscription ids. So it should not be a problem to actually stop casting MariaDB results to Python2 long data type (which is BTW gone in Python3).
**Describe the solution you'd like**
Change the MySQLdb conversion map to stop returning `long` data type, instead `int` should be returned. This is where that map lives:
https://github.com/farcepest/MySQLdb1/blob/MySQLdb-1.2.4b4/MySQLdb/converters.py#L127
According to the documentation, it should be possible to change it while creating the database connection, but I failed to do so with SQLAlchemy:
https://mysqlclient.readthedocs.io/MySQLdb.html#module-MySQLdb.converters
So my suggestion is to actually patch our version of mysqldb (py2-mysqldb.spec) such that it stops casting data to long. This is the most performant approach as well. We are about to stop deploying our services in Python2, so this change won't be there for too long.
**Describe alternatives you've considered**
Cast long to int on our client, but the performance penalty would be significant.
**Additional context**
Related - with further content - to this PR and the GH issues that it's meant to fix: https://github.com/dmwm/WMCore/pull/10605 | 1.0 | Change our MySQL driver to stop returning data in the long data type - **Impact of the new feature**
WMAgent (and T0, but they don't use MySQL)
**Is your feature request related to a problem? Please describe.**
Yes, it's been discussed in this code review comment:
https://github.com/dmwm/WMCore/pull/10605/files#r654225846
in short, `hash()` is no longer reproducible between python2 and python3. So we are investigating the adoption of `hashlib.sha1` (or another algorithm in there) to have reproducible behaviour. One of the issues that we find there is that our python data structure sometimes contain long data type (returned from MariaDB), causing the resulting hash to be different (thus affecting object comparison).
Python2 max integer is very large:
```
(PY2) alannewmacpro:cmsdist amaltaro$ python -c 'import sys; print(sys.maxint)'
9223372036854775807
```
and AFAIK only small enough numbers are currently fetched from MariaDB (run/lumi/event number), maybe some other file/fileset/subscription ids. So it should not be a problem to actually stop casting MariaDB results to Python2 long data type (which is BTW gone in Python3).
**Describe the solution you'd like**
Change the MySQLdb conversion map to stop returning `long` data type, instead `int` should be returned. This is where that map lives:
https://github.com/farcepest/MySQLdb1/blob/MySQLdb-1.2.4b4/MySQLdb/converters.py#L127
According to the documentation, it should be possible to change it while creating the database connection, but I failed to do so with SQLAlchemy:
https://mysqlclient.readthedocs.io/MySQLdb.html#module-MySQLdb.converters
So my suggestion is to actually patch our version of mysqldb (py2-mysqldb.spec) such that it stops casting data to long. This is the most performant approach as well. We are about to stop deploying our services in Python2, so this change won't be there for too long.
**Describe alternatives you've considered**
Cast long to int on our client, but the performance penalty would be significant.
**Additional context**
Related - with further content - to this PR and the GH issues that it's meant to fix: https://github.com/dmwm/WMCore/pull/10605 | priority | change our mysql driver to stop returning data in the long data type impact of the new feature wmagent and but they don t use mysql is your feature request related to a problem please describe yes it s been discussed in this code review comment in short hash is no longer reproducible between and so we are investigating the adoption of hashlib or another algorithm in there to have reproducible behaviour one of the issues that we find there is that our python data structure sometimes contain long data type returned from mariadb causing the resulting hash to be different thus affecting object comparison max integer is very large alannewmacpro cmsdist amaltaro python c import sys print sys maxint and afaik only small enough numbers are currently fetched from mariadb run lumi event number maybe some other file fileset subscription ids so it should not be a problem to actually stop casting mariadb results to long data type which is btw gone in describe the solution you d like change the mysqldb conversion map to stop returning long data type instead int should be returned this is where that map lives according to the documentation it should be possible to change it while creating the database connection but i failed to do so with sqlalchemy so my suggestion is to actually patch our version of mysqldb mysqldb spec such that it stops casting data to long this is the most performant approach as well we are about to stop deploying our services in so this change won t be there for too long describe alternatives you ve considered cast long to int on our client but the performance penalty would be significant additional context related with further content to this pr and the gh issues that it s meant to fix | 1 |
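The reproducibility concern in this entry can be sketched with a small stdlib-only helper (hypothetical, not WMCore's actual implementation): unlike the built-in `hash()`, which is salted per process and differs between Python 2 and 3, a `hashlib.sha1` digest over a canonicalized representation is stable across runs — but it is type-sensitive, which is exactly why a stray `long` leaking out of the MySQL driver changes the result.

```python
import hashlib

def stable_digest(data):
    # Canonicalize a flat mapping by sorting its items and hashing the
    # repr with SHA-1. Reproducible across processes and versions, but
    # type-sensitive: in Python 2, repr() of a long carries a trailing
    # "L" (and 1.0 reprs differently from 1), so the digest shifts
    # unless the driver returns a single integer type.
    canonical = repr(sorted(data.items())).encode("utf-8")
    return hashlib.sha1(canonical).hexdigest()
```

Note how `stable_digest({"run": 1})` and `stable_digest({"run": 1.0})` differ even though the values compare equal, mirroring the long-vs-int mismatch described above.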
282,137 | 8,704,012,313 | IssuesEvent | 2018-12-05 18:11:16 | strapi/strapi | https://api.github.com/repos/strapi/strapi | closed | Adding a file to api config folder disables those api | priority: medium status: have to reproduce type: bug 🐛 | <!-- ⚠️ If you do not respect this template your issue will be closed. -->
<!-- =============================================================================== -->
<!-- ⚠️ If you are not using the current Strapi release, you will be asked to update. -->
<!-- Please see the wiki for guides on upgrading to the latest release. -->
<!-- =============================================================================== -->
<!-- ⚠️ Make sure to browse the opened and closed issues before submitting your issue. -->
<!-- ⚠️ Before writing your issue make sure you are using:-->
<!-- Node 10.x.x -->
<!-- npm 6.x.x -->
<!-- The latest version of Strapi. -->
**Informations**
- **Node.js version**: v10.10.0
- **npm version**: 6.4.1
- **Strapi version**: 3.0.0-alpha.14.4.0
- **Database**: 3.6.4
- **Operating system**: Ubuntu 18.04
**What is the current behavior?**
Generated api returns 404 response on all related routes.
**Steps to reproduce the problem**
1. Install new project using command `strapi new projectname`
2. Perform commands
```
cd projectname
strapi generate:api test
strapi start
```
3. Make sure generated api is working correctly, sending request from postman or browser:
```
GET http://localhost:1337/tests
```
Expected response:
```
{
"statusCode": 403,
"error": "Forbidden",
"message": "Forbidden"
}
```
We did not set up any permissions for this api, therefore it's forbidden by default.
4. Create file with name v.js ( this could be any filename starting from letter 'v' - 'v123456.js', etc. ) at the folder `$PROJECT_ROOT/api/test/config `;
This file must export function, like this:
```
function testFunction() {
console.log("this function is not called");
}
module.exports = testFunction;
```
5. Server will restart automatically.
6. Perform the same request from step 3. Result will be:
```
{
"statusCode": 404,
"error": "Not Found",
"message": "Not Found"
}
```
**What is the expected behavior?**
Expected the same response as in step 3.
**Suggested solutions**
Have no idea. | 1.0 | Adding a file to api config folder disables those api - <!-- ⚠️ If you do not respect this template your issue will be closed. -->
<!-- =============================================================================== -->
<!-- ⚠️ If you are not using the current Strapi release, you will be asked to update. -->
<!-- Please see the wiki for guides on upgrading to the latest release. -->
<!-- =============================================================================== -->
<!-- ⚠️ Make sure to browse the opened and closed issues before submitting your issue. -->
<!-- ⚠️ Before writing your issue make sure you are using:-->
<!-- Node 10.x.x -->
<!-- npm 6.x.x -->
<!-- The latest version of Strapi. -->
**Informations**
- **Node.js version**: v10.10.0
- **npm version**: 6.4.1
- **Strapi version**: 3.0.0-alpha.14.4.0
- **Database**: 3.6.4
- **Operating system**: Ubuntu 18.04
**What is the current behavior?**
Generated api returns 404 response on all related routes.
**Steps to reproduce the problem**
1. Install new project using command `strapi new projectname`
2. Perform commands
```
cd projectname
strapi generate:api test
strapi start
```
3. Make sure generated api is working correctly, sending request from postman or browser:
```
GET http://localhost:1337/tests
```
Expected response:
```
{
"statusCode": 403,
"error": "Forbidden",
"message": "Forbidden"
}
```
We did not set up any permissions for this api, therefore it's forbidden by default.
4. Create file with name v.js ( this could be any filename starting from letter 'v' - 'v123456.js', etc. ) at the folder `$PROJECT_ROOT/api/test/config `;
This file must export function, like this:
```
function testFunction() {
console.log("this function is not called");
}
module.exports = testFunction;
```
5. Server will restart automatically.
6. Perform the same request from step 3. Result will be:
```
{
"statusCode": 404,
"error": "Not Found",
"message": "Not Found"
}
```
**What is the expected behavior?**
Expected the same response as in step 3.
**Suggested solutions**
Have no idea. | priority | adding a file to api config folder disables those api informations node js version npm version strapi version alpha database operating system ubuntu what is the current behavior generated api returns response on all related routes steps to reproduce the problem install new project using command strapi new projectname perform commands cd projectname strapi generate api test strapi start make sure generated api is working correctly sending request from postman or browser get expected response statuscode error forbidden message forbidden we did not set up any permissions for this api therefore it s forbidden by default create file with name v js this could be any filename starting from letter v js etc at the folder project root api test config this file must export function like this function testfunction console log this function is not called module exports testfunction server will restart automatically perform the same request from step result will be statuscode error not found message not found what is the expected behavior expected the same response as in step suggested solutions have no idea | 1 |
790,329 | 27,822,948,852 | IssuesEvent | 2023-03-19 12:56:31 | bounswe/bounswe2023group2 | https://api.github.com/repos/bounswe/bounswe2023group2 | closed | Account Features | state: assigned priority: medium state: In progress effort: level 3 | Create mock ups for account features:
- [x] login page
- [x] sign up page
- [x] reset password page | 1.0 | Account Features - Create mock ups for account features:
- [x] login page
- [x] sign up page
- [x] reset password page | priority | account features create mock ups for account features login page sign up page reset password page | 1 |
53,796 | 3,050,857,315 | IssuesEvent | 2015-08-12 02:34:36 | yawlfoundation/yawl | https://api.github.com/repos/yawlfoundation/yawl | closed | Documentation: info on WSInvoker | auto-migrated Milestone-2.0 Priority-Medium Type-Enhancement | ```
Update the documentation for the WSInvoker by specifying that the following
restrictions apply to the web service that can be invoked by this custom
service:
- RPC-style only
- synchronous only
- input and output messages for the selected operation must have only one
part (to be verified)
- the part must be of type string, int, double, float, boolean and decimal
The attached is an example of this service
```
Original issue reported on code.google.com by `[email protected]` on 3 Apr 2009 at 8:13 | 1.0 | Documentation: info on WSInvoker - ```
Update the documentation for the WSInvoker by specifying that the following
restrictions apply to the web service that can be invoked by this custom
service:
- RPC-style only
- synchronous only
- input and output messages for the selected operation must have only one
part (to be verified)
- the part must be of type string, int, double, float, boolean and decimal
The attached is an example of this service
```
Original issue reported on code.google.com by `[email protected]` on 3 Apr 2009 at 8:13 | priority | documentation info on wsinvoker update the documentation for the wsinvoker by specifying that the following restrictions apply to the web service that can be invoked by this custom service rpc style only synchronous only input and output messages for the selected operation must have only one part to be verified the part must be of type string int double float boolean and decimal the attached is an example of this service original issue reported on code google com by marcello gmail com on apr at | 1 |
654,505 | 21,654,536,297 | IssuesEvent | 2022-05-06 12:55:24 | rocky-linux/rockylinux.org | https://api.github.com/repos/rocky-linux/rockylinux.org | closed | Translation Completed: Traditional Chinese | priority: medium tag: content type: enhancement | **Language Translated: Traditional Chinese
**Localized Language Name: Traditional Chinese
**ISO 639-1 Language Code: zh-tw
**Localized Date Format: `YYYY-MM-DD`
| 1.0 | Translation Completed: Traditional Chinese - **Language Translated: Traditional Chinese
**Localized Language Name: Traditional Chinese
**ISO 639-1 Language Code: zh-tw
**Localized Date Format: `YYYY-MM-DD`
| priority | translation completed traditional chinese language translated traditional chinese localized language name traditional chinese iso language code zh tw localized date format yyyy mm dd | 1 |
459,118 | 13,186,594,450 | IssuesEvent | 2020-08-13 00:40:52 | fog/fog-google | https://api.github.com/repos/fog/fog-google | closed | Incorrect metadata format in comment | docs priority/medium | I am having trouble setting the metadata for a server when I structure the metadata object according to documentation below:
https://github.com/fog/fog-google/blob/46d1bee9516bdbf3d020f012f7b7aee9039efd47/lib/fog/compute/google/requests/set_server_metadata.rb#L24
It seems like having a metadata object that includes an `items` key does not get parsed correctly. However, setting the object to what the value of `items` was seems to work. This also seems to be the way it is used in the test in:
https://github.com/fog/fog-google/blob/46d1bee9516bdbf3d020f012f7b7aee9039efd47/test/integration/compute/core_compute/test_servers.rb#L42
Or perhaps I am calling the method incorrectly? It throws the following error when including the `items` key:
```
/Users/user/.vagrant.d/gems/2.4.9/gems/google-api-client-0.23.9/generated/google/apis/compute_v1/classes.rb:11437:in
`initialize': wrong number of arguments (given 1, expected 0) (ArgumentError)
``` | 1.0 | Incorrect metadata format in comment - I am having trouble setting the metadata for a server when I structure the metadata object according to documentation below:
https://github.com/fog/fog-google/blob/46d1bee9516bdbf3d020f012f7b7aee9039efd47/lib/fog/compute/google/requests/set_server_metadata.rb#L24
It seems like having a metadata object that includes an `items` key does not get parsed correctly. However, setting the object to what the value of `items` was seems to work. This also seems to be the way it is used in the test in:
https://github.com/fog/fog-google/blob/46d1bee9516bdbf3d020f012f7b7aee9039efd47/test/integration/compute/core_compute/test_servers.rb#L42
Or perhaps I am calling the method incorrectly? It throws the following error when including the `items` key:
```
/Users/user/.vagrant.d/gems/2.4.9/gems/google-api-client-0.23.9/generated/google/apis/compute_v1/classes.rb:11437:in
`initialize': wrong number of arguments (given 1, expected 0) (ArgumentError)
``` | priority | incorrect metadata format in comment i am having trouble setting the metadata for a server when i structure the metadata object according to documentation below it seems like having a metadata object that includes an items key does not get parsed correctly however setting the object to what the value of items was seems to work this also seems to be the way it is used in the test in or perhaps i am calling the method incorrectly it throws the following error when including the items key users user vagrant d gems gems google api client generated google apis compute classes rb in initialize wrong number of arguments given expected argumenterror | 1 |
407,577 | 11,924,062,143 | IssuesEvent | 2020-04-01 08:55:00 | wazuh/wazuh-docker | https://api.github.com/repos/wazuh/wazuh-docker | closed | Use official nginx image without rebuilding | priority/medium status/on-hold type/enhancement | ### Description
Currently we're rebuilding the official nginx image; all we're doing is adding a couple of tools and an entrypoint which builds certificates and sets things up before starting the web server.
Basically this:
```
RUN apt-get update && apt-get install -y openssl apache2-utils
COPY config/entrypoint.sh /entrypoint.sh
...
ENTRYPOINT /entrypoint.sh
```
This entrypoint will build a self signed certificate and the htpasswd file for basic auth, hence the installed packages `openssl` and `apache2-utils`.
Using the nginx image without rebuilding is possible, by mounting a ready made config directory with these files already.
#### Pros
- more freedom for users to define anything on the config
#### Cons
- build your own htpasswd/certificates
### Tasks
- [x] Create a sample config for nginx to mount
- [x] Rewrite nginx config on `docker-compose.yml` to use external config
- [x] Test execution of the nginx container
- [x] Test cluster execution | 1.0 | Use official nginx image without rebuilding - ### Description
Currently we're rebuilding the official nginx image; all we're doing is adding a couple of tools and an entrypoint which builds certificates and sets things up before starting the web server.
Basically this:
```
RUN apt-get update && apt-get install -y openssl apache2-utils
COPY config/entrypoint.sh /entrypoint.sh
...
ENTRYPOINT /entrypoint.sh
```
This entrypoint will build a self signed certificate and the htpasswd file for basic auth, hence the installed packages `openssl` and `apache2-utils`.
Using the nginx image without rebuilding is possible, by mounting a ready made config directory with these files already.
#### Pros
- more freedom for users to define anything on the config
#### Cons
- build your own htpasswd/certificates
### Tasks
- [x] Create a sample config for nginx to mount
- [x] Rewrite nginx config on `docker-compose.yml` to use external config
- [x] Test execution of the nginx container
- [x] Test cluster execution | priority | use official nginx image without rebuilding description currently we re rebuilding the official nginx image all we re doing is adding a couple of tools and an entrypoint which builds certificates and set things up before starting the web server basically this run apt get update apt get install y openssl utils copy config entrypoint sh entrypoint sh entrypoint entrypoint sh this entrypoint will build a self signed certificate and the htpasswd file for basic auth hence the installed packages openssl and utils using the nginx image without rebuilding is possible by mounting a ready made config directory with these files already pros more freedom for users to define anything on the config cons build your own htpasswd certificates tasks create a sample config for nginx to mount rewrite nginx config on docker compose yml to use external config test execution of the nginx container test cluster execution | 1 |
665,510 | 22,320,538,850 | IssuesEvent | 2022-06-14 05:47:08 | opencrvs/opencrvs-core | https://api.github.com/repos/opencrvs/opencrvs-core | closed | system/browser default font shows before OpenCRVS Noto sans | 👹Bug Priority: medium | **Describe the bug**
- we get a flash of the system/browser font before the OpenCRVS font loads
- need to find a way to load the fonts before showing the front end....
https://images.zenhubusercontent.com/91778759/4ff93853-0438-4771-8c4c-5ec6c2a2d8ad/fonts.mp4
**To Reproduce**
Steps to reproduce the behaviour:
1. Refresh page
1. Clicking on work-queues after a recent login
**Expected behaviour**
- you instantly see Noto Sans | 1.0 | system/browser default font shows before OpenCRVS Noto sans - **Describe the bug**
- we get a flash of the system/browser font before the OpenCRVS font loads
- need to find a way to load the fonts before showing the front end....
https://images.zenhubusercontent.com/91778759/4ff93853-0438-4771-8c4c-5ec6c2a2d8ad/fonts.mp4
**To Reproduce**
Steps to reproduce the behaviour:
1. Refresh page
1. Clicking on work-queues after a recent login
**Expected behaviour**
- you instantly see Noto Sans | priority | system browser default font shows before opencrvs noto sans describe the bug we get a flash of the system broswer font before opencrvs font loads need find way to load in fonts before showing font end to reproduce steps to reproduce the behaviour refresh page clicking on work queues after a recent login expected behaviour you instantly see noto sans | 1 |
550,045 | 16,103,949,232 | IssuesEvent | 2021-04-27 12:57:13 | MattTheLegoman/RealmsInExile | https://api.github.com/repos/MattTheLegoman/RealmsInExile | closed | Sauron can lead armies | oddity priority: medium | On the subject of the Sauron leading armies question asked in #discussion. I remember us mentioning it before, but do we want Sauron leading armies before he regains the One?
If we do, we could make a custom trait like "A Shadow of Malice" that disables army leading like Infirm does. Then when Sauron reclaims the one ring, the trait could be replaced by a new one that allows him to lead armies and gives him appropriate buffs. | 1.0 | Sauron can lead armies - On the subject of the Sauron leading armies question asked in #discussion. I remember us mentioning it before, but do we want Sauron leading armies before he regains the One?
If we do, we could make a custom trait like "A Shadow of Malice" that disables army leading like Infirm does. Then when Sauron reclaims the one ring, the trait could be replaced by a new one that allows him to lead armies and gives him appropriate buffs. | priority | sauron can lead armies on the subject of the sauron leading armies question asked discussion i remember us mentioning it before but do we want sauron leading armies before he regains the one if we do we could make a custom trait like a shadow of malice that disables army leading like infirm does then when sauron reclaims the one ring the trait could be replaced by a new one that allows him to lead armies and gives him appropriate buffs | 1 |
362,509 | 10,728,550,217 | IssuesEvent | 2019-10-28 14:06:28 | ansible/awx | https://api.github.com/repos/ansible/awx | closed | A server error has occurred (/migrations_notran) | help wanted priority:medium state:needs_devel type:bug | ##### ISSUE TYPE
- Bug Report
##### SUMMARY
<!-- Briefly describe the problem. -->
After installing AWX from devel when you visit the AWX web page it displays `Server Error` and `A server error has occurred.`
##### ENVIRONMENT
* AWX version: devel (7.0.0)
* AWX install method: docker on linux
* Ansible version: 2.8.5
* Operating System: Debian 10 (Buster)
* Web Browser: Firefox 69.0.1
##### STEPS TO REPRODUCE
<!-- Please describe exactly how to reproduce the problem. -->
Run the installer freshly after modifying the inventory file; when it finishes without error, the AWX containers come up. When you try visiting the AWX web page it presents the internal error.
##### EXPECTED RESULTS
<!-- What did you expect to happen when running the steps above? -->
That it works perfectly as it did on 6.1.0.
##### ACTUAL RESULTS
<!-- What actually happened? -->
AWX seems to be in a broken state.
##### ADDITIONAL INFORMATION
<!-- Include any links to sosreport, database dumps, screenshots or other
information. -->
Logs from the `awx_task` container:
```
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/bin/awx-manage", line 11, in <module>
load_entry_point('awx==7.0.0.0', 'console_scripts', 'awx-manage')()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/__init__.py", line 142, in manage
execute_from_command_line(sys.argv)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
utility.execute()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/base.py", line 323, in run_from_argv
self.execute(*args, **cmd_options)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/base.py", line 364, in execute
output = self.handle(*args, **options)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/management/commands/run_dispatcher.py", line 123, in handle
reaper.reap()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/dispatch/reaper.py", line 36, in reap
me = instance or Instance.objects.me()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/managers.py", line 114, in me
if node.exists():
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/query.py", line 766, in exists
return self.query.has_results(using=self.db)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/sql/query.py", line 522, in has_results
return compiler.has_results()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/sql/compiler.py", line 1070, in has_results
return bool(self.execute_sql(SINGLE))
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/sql/compiler.py", line 1100, in execute_sql
cursor.execute(sql, params)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "main_instance" does not exist
LINE 1: SELECT (1) AS "a" FROM "main_instance" WHERE "main_instance"...
```
Logs from the `awx_web` container:
```
2019-09-30 17:21:04,972 ERROR django.request Internal Server Error: /migrations_notran/
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/contrib/sessions/backends/base.py", line 189, in _get_session
return self._session_cache
AttributeError: 'SessionStore' object has no attribute '_session_cache'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
psycopg2.ProgrammingError: relation "django_session" does not exist
LINE 1: ...ession_data", "django_session"."expire_date" FROM "django_se...
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/utils/deprecation.py", line 93, in __call__
response = self.process_request(request)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/middleware/locale.py", line 21, in process_request
language = translation.get_language_from_request(request, check_path=i18n_patterns_used)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/utils/translation/__init__.py", line 236, in get_language_from_request
return _trans.get_language_from_request(request, check_path)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/utils/translation/trans_real.py", line 463, in get_language_from_request
lang_code = request.session.get(LANGUAGE_SESSION_KEY)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/contrib/sessions/backends/base.py", line 65, in get
return self._session.get(key, default)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/contrib/sessions/backends/base.py", line 194, in _get_session
self._session_cache = self.load()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/contrib/sessions/backends/db.py", line 43, in load
s = self._get_session_from_db()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/contrib/sessions/backends/db.py", line 34, in _get_session_from_db
expire_date__gt=timezone.now()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/manager.py", line 82, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/query.py", line 402, in get
num = len(clone)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/query.py", line 256, in __len__
self._fetch_all()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/query.py", line 1242, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/query.py", line 55, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/sql/compiler.py", line 1100, in execute_sql
cursor.execute(sql, params)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "django_session" does not exist
LINE 1: ...ession_data", "django_session"."expire_date" FROM "django_se...
```
Logs from the `postgres` container:
```
2019-09-30 17:29:34.772 UTC [388] ERROR: relation "main_instance" does not exist at character 24
2019-09-30 17:29:34.772 UTC [388] STATEMENT: SELECT (1) AS "a" FROM "main_instance" WHERE "main_instance"."hostname" = 'awx' LIMIT 1
2019-09-30 17:29:37.764 UTC [389] ERROR: relation "conf_setting" does not exist at character 158
``` | 1.0 | A server error has occurred (/migrations_notran) - ##### ISSUE TYPE
- Bug Report
##### SUMMARY
<!-- Briefly describe the problem. -->
After installing AWX from devel when you visit the AWX web page it displays `Server Error` and `A server error has occurred.`
##### ENVIRONMENT
* AWX version: devel (7.0.0)
* AWX install method: docker on linux
* Ansible version: 2.8.5
* Operating System: Debian 10 (Buster)
* Web Browser: Firefox 69.0.1
##### STEPS TO REPRODUCE
<!-- Please describe exactly how to reproduce the problem. -->
Run the installer freshly after modifying the inventory file; when it finishes without error, the AWX containers come up. When you try visiting the AWX web page it presents the internal error.
##### EXPECTED RESULTS
<!-- What did you expect to happen when running the steps above? -->
That it works perfectly as it did on 6.1.0.
##### ACTUAL RESULTS
<!-- What actually happened? -->
AWX seems to be in a broken state.
##### ADDITIONAL INFORMATION
<!-- Include any links to sosreport, database dumps, screenshots or other
information. -->
Logs from the `awx_task` container:
```
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/bin/awx-manage", line 11, in <module>
load_entry_point('awx==7.0.0.0', 'console_scripts', 'awx-manage')()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/__init__.py", line 142, in manage
execute_from_command_line(sys.argv)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
utility.execute()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/__init__.py", line 375, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/base.py", line 323, in run_from_argv
self.execute(*args, **cmd_options)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/management/base.py", line 364, in execute
output = self.handle(*args, **options)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/management/commands/run_dispatcher.py", line 123, in handle
reaper.reap()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/dispatch/reaper.py", line 36, in reap
me = instance or Instance.objects.me()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/awx/main/managers.py", line 114, in me
if node.exists():
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/query.py", line 766, in exists
return self.query.has_results(using=self.db)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/sql/query.py", line 522, in has_results
return compiler.has_results()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/sql/compiler.py", line 1070, in has_results
return bool(self.execute_sql(SINGLE))
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/sql/compiler.py", line 1100, in execute_sql
cursor.execute(sql, params)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "main_instance" does not exist
LINE 1: SELECT (1) AS "a" FROM "main_instance" WHERE "main_instance"...
```
Logs from the `awx_web` container:
```
2019-09-30 17:21:04,972 ERROR django.request Internal Server Error: /migrations_notran/
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/contrib/sessions/backends/base.py", line 189, in _get_session
return self._session_cache
AttributeError: 'SessionStore' object has no attribute '_session_cache'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
psycopg2.ProgrammingError: relation "django_session" does not exist
LINE 1: ...ession_data", "django_session"."expire_date" FROM "django_se...
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/utils/deprecation.py", line 93, in __call__
response = self.process_request(request)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/middleware/locale.py", line 21, in process_request
language = translation.get_language_from_request(request, check_path=i18n_patterns_used)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/utils/translation/__init__.py", line 236, in get_language_from_request
return _trans.get_language_from_request(request, check_path)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/utils/translation/trans_real.py", line 463, in get_language_from_request
lang_code = request.session.get(LANGUAGE_SESSION_KEY)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/contrib/sessions/backends/base.py", line 65, in get
return self._session.get(key, default)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/contrib/sessions/backends/base.py", line 194, in _get_session
self._session_cache = self.load()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/contrib/sessions/backends/db.py", line 43, in load
s = self._get_session_from_db()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/contrib/sessions/backends/db.py", line 34, in _get_session_from_db
expire_date__gt=timezone.now()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/manager.py", line 82, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/query.py", line 402, in get
num = len(clone)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/query.py", line 256, in __len__
self._fetch_all()
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/query.py", line 1242, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/query.py", line 55, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/models/sql/compiler.py", line 1100, in execute_sql
cursor.execute(sql, params)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 67, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 76, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/utils.py", line 89, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/var/lib/awx/venv/awx/lib64/python3.6/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: relation "django_session" does not exist
LINE 1: ...ession_data", "django_session"."expire_date" FROM "django_se...
```
Logs from the `postgres` container:
```
2019-09-30 17:29:34.772 UTC [388] ERROR: relation "main_instance" does not exist at character 24
2019-09-30 17:29:34.772 UTC [388] STATEMENT: SELECT (1) AS "a" FROM "main_instance" WHERE "main_instance"."hostname" = 'awx' LIMIT 1
2019-09-30 17:29:37.764 UTC [389] ERROR: relation "conf_setting" does not exist at character 158
``` | priority | a server error has occurred migrations notran issue type bug report summary after installing awx from devel when you visit the awx web page it displays server error and a server error has occurred environment awx version devel awx install method docker on linux ansible version operating system debian buster web browser firefox steps to reproduce run the installer freshly after modifying inventory file when it finishes without error awx containers come up when you try visiting the awx web page it presents the internal error expected results that it works perfectly as it did on actual results awx seems to be in a broken state additional information include any links to sosreport database dumps screenshots or other information logs from the awx task container the above exception was the direct cause of the following exception traceback most recent call last file usr bin awx manage line in load entry point awx console scripts awx manage file var lib awx venv awx site packages awx init py line in manage execute from command line sys argv file var lib awx venv awx site packages django core management init py line in execute from command line utility execute file var lib awx venv awx site packages django core management init py line in execute self fetch command subcommand run from argv self argv file var lib awx venv awx site packages django core management base py line in run from argv self execute args cmd options file var lib awx venv awx site packages django core management base py line in execute output self handle args options file var lib awx venv awx site packages awx main management commands run dispatcher py line in handle reaper reap file var lib awx venv awx site packages awx main dispatch reaper py line in reap me instance or instance objects me file var lib awx venv awx site packages awx main managers py line in me if node exists file var lib awx venv awx site packages django db models query py line in exists return self query has results using self db file var lib awx venv awx site packages django db models sql query py line in has results return compiler has results file var lib awx venv awx site packages django db models sql compiler py line in has results return bool self execute sql single file var lib awx venv awx site packages django db models sql compiler py line in execute sql cursor execute sql params file var lib awx venv awx site packages django db backends utils py line in execute return self execute with wrappers sql params many false executor self execute file var lib awx venv awx site packages django db backends utils py line in execute with wrappers return executor sql params many context file var lib awx venv awx site packages django db backends utils py line in execute return self cursor execute sql params file var lib awx venv awx site packages django db utils py line in exit raise dj exc value with traceback traceback from exc value file var lib awx venv awx site packages django db backends utils py line in execute return self cursor execute sql params django db utils programmingerror relation main instance does not exist line select as a from main instance where main instance logs from the awx web container error django request internal server error migrations notran traceback most recent call last file var lib awx venv awx site packages django contrib sessions backends base py line in get session return self session cache attributeerror sessionstore object has no attribute session cache during handling of the above exception another exception occurred traceback most recent call last file var lib awx venv awx site packages django db backends utils py line in execute return self cursor execute sql params programmingerror relation django session does not exist line ession data django session expire date from django se the above exception was the direct cause of the following exception traceback most recent call last file var lib awx venv awx site packages django core handlers exception py line in inner response get response request file var lib awx venv awx site packages django utils deprecation py line in call response self process request request file var lib awx venv awx site packages django middleware locale py line in process request language translation get language from request request check path patterns used file var lib awx venv awx site packages django utils translation init py line in get language from request return trans get language from request request check path file var lib awx venv awx site packages django utils translation trans real py line in get language from request lang code request session get language session key file var lib awx venv awx site packages django contrib sessions backends base py line in get return self session get key default file var lib awx venv awx site packages django contrib sessions backends base py line in get session self session cache self load file var lib awx venv awx site packages django contrib sessions backends db py line in load s self get session from db file var lib awx venv awx site packages django contrib sessions backends db py line in get session from db expire date gt timezone now file var lib awx venv awx site packages django db models manager py line in manager method return getattr self get queryset name args kwargs file var lib awx venv awx site packages django db models query py line in get num len clone file var lib awx venv awx site packages django db models query py line in len self fetch all file var lib awx venv awx site packages django db models query py line in fetch all self result cache list self iterable class self file var lib awx venv awx site packages django db models query py line in iter results compiler execute sql chunked fetch self chunked fetch chunk size self chunk size file var lib awx venv awx site packages django db models sql compiler py line in execute sql cursor execute sql params file var lib awx venv awx site packages django db backends utils py line in execute return self execute with wrappers sql params many false executor self execute file var lib awx venv awx site packages django db backends utils py line in execute with wrappers return executor sql params many context file var lib awx venv awx site packages django db backends utils py line in execute return self cursor execute sql params file var lib awx venv awx site packages django db utils py line in exit raise dj exc value with traceback traceback from exc value file var lib awx venv awx site packages django db backends utils py line in execute return self cursor execute sql params django db utils programmingerror relation django session does not exist line ession data django session expire date from django se logs from the postgres container utc error relation main instance does not exist at character utc statement select as a from main instance where main instance hostname awx limit utc error relation conf setting does not exist at character | 1
68,494 | 3,288,810,621 | IssuesEvent | 2015-10-29 16:30:06 | cs2103aug2015-t13-2j/main | https://api.github.com/repos/cs2103aug2015-t13-2j/main | opened | Pop-ups will appear once a call for "archive", "search", as well as "help" is initiated by user input | priority.medium status.ongoing type.task | For help, since we have 6 different edit commands, we have decided to allow users to access the help guide specifically for our edit function through "help edit", which also will be mentioned when an user is unable to edit his tasks successfully. | 1.0 | Pop-ups will appear once a call for "archive", "search", as well as "help" is initiated by user input - For help, since we have 6 different edit commands, we have decided to allow users to access the help guide specifically for our edit function through "help edit", which also will be mentioned when an user is unable to edit his tasks successfully. | priority | pop ups will appear once a call for archive search as well as help is initiated by user input for help since we have different edit commands we have decided to allow users to access the help guide specifically for our edit function through help edit which also will be mentioned when an user is unable to edit his tasks successfully | 1 |
526,475 | 15,293,837,578 | IssuesEvent | 2021-02-24 01:08:02 | ijsto/strapi-plugin-migrate | https://api.github.com/repos/ijsto/strapi-plugin-migrate | closed | Add Readme instructions and docs | priority:medium | Add installation instructions and initialize basic documentation | 1.0 | Add Readme instructions and docs - Add installation instructions and initialize basic documentation | priority | add readme instructions and docs add installation instructions and initialize basic documentation | 1 |
614,333 | 19,179,815,789 | IssuesEvent | 2021-12-04 07:04:28 | cse110-fa21-group5/cse110-fa21-group5 | https://api.github.com/repos/cse110-fa21-group5/cse110-fa21-group5 | closed | API Integration: Nutrition facts on a recipe | type: feature ✨ point: 3 priority: medium | @Steven-Chang1114 @Xuan-Wang-Summer
- 1 serving size worth of nutrition for recipes | 1.0 | API Integration: Nutrition facts on a recipe - @Steven-Chang1114 @Xuan-Wang-Summer
- 1 serving size worth of nutrition for recipes | priority | api integration nutrition facts on a recipe steven xuan wang summer serving size worth of nutrition for recipes | 1 |
756,838 | 26,487,626,343 | IssuesEvent | 2023-01-17 19:27:45 | PrefectHQ/orion-design | https://api.github.com/repos/PrefectHQ/orion-design | closed | Remove 3 dot menu on Notifications if the user doesn't have permission to edit/delete | priority:medium Quick (should be a simple fix) |
![Image](https://user-images.githubusercontent.com/104096908/207693736-ce3c4825-50a9-445c-8eb7-95b43b0dc9de.png)
| 1.0 | Remove 3 dot menu on Notifications if the user doesn't have permission to edit/delete -
![Image](https://user-images.githubusercontent.com/104096908/207693736-ce3c4825-50a9-445c-8eb7-95b43b0dc9de.png)
| priority | remove dot menu on notifications if the user doesn t have permission to edit delete | 1 |
445,234 | 12,827,852,592 | IssuesEvent | 2020-07-06 19:19:52 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | CW: user history | Category: Web Priority: Medium Type: Task | We need to store a history of worlds owned by users.
When a user buys a new world put it in a new table for this.
Should store userid, aws_name, serverID, region, world price, start date, and end date
When world expires and is removed by cron job. also mark the end_Date in the world history table. | 1.0 | CW: user history - We need to store a history of worlds owned by users.
When a user buys a new world put it in a new table for this.
Should store userid, aws_name, serverID, region, world price, start date, and end date
When world expires and is removed by cron job. also mark the end_Date in the world history table. | priority | cw user history we need to store a history of worlds owned by users when a user buys a new world put it in a new table for this should store userid aws name serverid region world price start date and end date when world expires and is removed by cron job also mark the end date in the world history table | 1 |
637,559 | 20,671,665,452 | IssuesEvent | 2022-03-10 03:19:38 | adirh3/Fluent-Search | https://api.github.com/repos/adirh3/Fluent-Search | closed | Icons showing up tiny in FS | bug Medium Priority UI/UX | I'm starting to think this might be a bug/glitch in FS?
Version:
![image](https://user-images.githubusercontent.com/1836362/149611804-b396dedd-6975-433c-bb28-4ed9def50ad1.png)
Fluent Search:
![image](https://user-images.githubusercontent.com/1836362/149611823-da7cb186-6d9e-4acf-8335-cc141ff346ba.png)
![image](https://user-images.githubusercontent.com/1836362/149611839-2e638382-f1f6-4985-8313-6538c66451e6.png)
![image](https://user-images.githubusercontent.com/1836362/149611850-d52bc91e-18bf-4d88-9d62-4796eeb794e9.png)
![image](https://user-images.githubusercontent.com/1836362/149611871-72026c3c-c473-4dde-b77f-35b16992eae5.png)
![image](https://user-images.githubusercontent.com/1836362/149611876-31f95d7f-e44f-4696-904b-fdd736c9ba35.png)
![image](https://user-images.githubusercontent.com/1836362/149611895-0c4e22d0-8593-44d7-b183-66702d3c0e53.png)
![image](https://user-images.githubusercontent.com/1836362/149612095-f1455c57-5afa-428b-b990-0cf050ad199c.png)
All of these icons are showing up properly in Listary, Here's one example:
![image](https://user-images.githubusercontent.com/1836362/149611914-5029390f-6070-453f-b550-60298b7c5a2c.png)
| 1.0 | Icons showing up tiny in FS - I'm starting to think this might be a bug/glitch in FS?
Version:
![image](https://user-images.githubusercontent.com/1836362/149611804-b396dedd-6975-433c-bb28-4ed9def50ad1.png)
Fluent Search:
![image](https://user-images.githubusercontent.com/1836362/149611823-da7cb186-6d9e-4acf-8335-cc141ff346ba.png)
![image](https://user-images.githubusercontent.com/1836362/149611839-2e638382-f1f6-4985-8313-6538c66451e6.png)
![image](https://user-images.githubusercontent.com/1836362/149611850-d52bc91e-18bf-4d88-9d62-4796eeb794e9.png)
![image](https://user-images.githubusercontent.com/1836362/149611871-72026c3c-c473-4dde-b77f-35b16992eae5.png)
![image](https://user-images.githubusercontent.com/1836362/149611876-31f95d7f-e44f-4696-904b-fdd736c9ba35.png)
![image](https://user-images.githubusercontent.com/1836362/149611895-0c4e22d0-8593-44d7-b183-66702d3c0e53.png)
![image](https://user-images.githubusercontent.com/1836362/149612095-f1455c57-5afa-428b-b990-0cf050ad199c.png)
All of these icons are showing up properly in Listary, Here's one example:
![image](https://user-images.githubusercontent.com/1836362/149611914-5029390f-6070-453f-b550-60298b7c5a2c.png)
| priority | icons showing up tiny in fs i m starting to think this might be a bug glitch in fs version fluent search all of these icons are showing up properly in listary here s one example | 1 |
213,007 | 7,244,906,940 | IssuesEvent | 2018-02-14 16:25:55 | marvinlabs/customer-area | https://api.github.com/repos/marvinlabs/customer-area | closed | Post category archive pages are not available | Priority - medium bug | To reproduce:
- Create a post
- Assign to a category
- Try to access the category archive for that category
- We get redirected to the WPCA dashboard instead of the blog category archive
Cf. https://wp-customerarea.com/fr/support/topic/redirection-des-nouvelles-categories/
- Create a post
- Assign to a category
- Try to access the category archive for that category
- We get redirected to the WPCA dashboard instead of the blog category archive
Cf. https://wp-customerarea.com/fr/support/topic/redirection-des-nouvelles-categories/
428,221 | 12,404,922,079 | IssuesEvent | 2020-05-21 16:21:07 | DarshanShet777/Model-Airport | https://api.github.com/repos/DarshanShet777/Model-Airport | opened | Electrical Engineering: Checkpost IR Receiver Incorrect Readings | Medium Priority | The Checkposts cannot seem to sense the robot when it arrives. | 1.0 | Electrical Engineering: Checkpost IR Receiver Incorrect Readings - The Checkposts cannot seem to sense the robot when it arrives. | priority | electrical engineering checkpost ir receiver incorrect readings the checkposts cannot seem to sense the robot when it arrives | 1 |
117,951 | 4,729,562,141 | IssuesEvent | 2016-10-18 19:02:46 | Parabot/Parabot | https://api.github.com/repos/Parabot/Parabot | closed | New version notification | priority:medium status:accepted type:bug | When you get notified of a new version, it redirects you to the old BDN, should be v3.
This should also detect the version, nightly or not - whereas it will download a new nightly version if available, otherwise simply the latest. | 1.0 | New version notification - When you get notified of a new version, it redirects you to the old BDN, should be v3.
This should also detect the version, nightly or not - whereas it will download a new nightly version if available, otherwise simply the latest. | priority | new version notification when you get notified of a new version it redirects you to the old bdn should be this should also detect the version nightly or not whereas it will download a new nightly version if available otherwise simply the latest | 1 |
96,816 | 3,973,764,256 | IssuesEvent | 2016-05-04 19:45:50 | Metaswitch/sprout | https://api.github.com/repos/Metaswitch/sprout | closed | stack_data's name array is not necessarily big enough | bug medium-priority | #### Symptoms
The array was originally of size 16, and we were writing more than 16 names to it which was causing us to scribble on other fields.
#### Impact
Random failures due to fields being set to gibberish.
#### Release and environment
Release 95
#### Steps to reproduce
Add loads of aliases? | 1.0 | stack_data's name array is not necessarily big enough - #### Symptoms
The array was originally of size 16, and we were writing more than 16 names to it which was causing us to scribble on other fields.
#### Impact
Random failures due to fields being set to gibberish.
#### Release and environment
Release 95
#### Steps to reproduce
Add loads of aliases? | priority | stack data s name array is not necessarily big enough symptoms the array was originally of size and we were writing more than names to it which was causing us to scribble on other fields impact random failures due to fields being set to gibberish release and environment release steps to reproduce add loads of aliases | 1 |
57,223 | 3,081,249,717 | IssuesEvent | 2015-08-22 14:40:52 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | Right-click menu on a folder - copy - FOLDER name | bug imported Priority-Medium | _From [bobrikov](https://code.google.com/u/bobrikov/) on January 16, 2013 12:06:16_
Imho this needs to be done
**Attachment:** [Имя-папки.png](http://code.google.com/p/flylinkdc/issues/detail?id=887)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=887_ | 1.0 | Right-click menu on a folder - copy - FOLDER name - _From [bobrikov](https://code.google.com/u/bobrikov/) on January 16, 2013 12:06:16_
Imho this needs to be done
**Attachment:** [Имя-папки.png](http://code.google.com/p/flylinkdc/issues/detail?id=887)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=887_ | priority | right click menu on a folder copy folder name from on january imho this needs to be done attachment original issue | 1
550,147 | 16,105,703,830 | IssuesEvent | 2021-04-27 14:42:43 | sopra-fs21-group-08/sopra-fs21-group08-server | https://api.github.com/repos/sopra-fs21-group-08/sopra-fs21-group08-server | reopened | Create the Ticket interface and the individual tickets | medium priority task | time estimate: 5h
This task is part of User stories: #17 #9 #8 | 1.0 | Create the Ticket interface and the individual tickets - time estimate: 5h
This task is part of User stories: #17 #9 #8 | priority | create the ticket interface and the individual tickets time estimate this task is part of user stories | 1 |
592,368 | 17,876,838,284 | IssuesEvent | 2021-09-07 05:48:16 | annagabriel-hash/stock-transfer-app | https://api.github.com/repos/annagabriel-hash/stock-transfer-app | closed | Given that I'm a broker, when I go to the stocks page, and enter the amount of shares to be sold, the shares can be bought by the buyers. | App: Backend Priority: Medium State: In Progress Type: Feature | ### Tasks
- [x] Specify offer model
- [x] Offer table accepts stock id, user id, price, shares, status (open, closed, cancelled), order type (limit order and market order), action (buy or sell)
- [x] Specify trade model
- [x] Trade table accepts stock id, buyer_id, seller_id, price, shares, offer_ids
- [x] Given that I'm a broker, when I go to the stocks page, select market order, enter the amount of shares to be bought. The shares are sold at the current market price and my balance decreases based on the amount sold. | 1.0 | Given that I'm a broker, when I go to the stocks page, and enter the amount of shares to be sold, the shares can be bought by the buyers. - ### Tasks
- [x] Specify offer model
- [x] Offer table accepts stock id, user id, price, shares, status (open, closed, cancelled), order type (limit order and market order), action (buy or sell)
- [x] Specify trade model
- [x] Trade table accepts stock id, buyer_id, seller_id, price, shares, offer_ids
- [x] Given that I'm a broker, when I go to the stocks page, select market order, enter the amount of shares to be bought. The shares are sold at the current market price and my balance decreases based on the amount sold. | priority | given that i m a broker when i go to the stocks page and enter the amount of shares to be sold the shares can be bought by the buyers tasks specify offer model offer table accepts stock id user id price shares status open closed cancelled order type limit order and market order action buy or sell specify trade model trade table accepts stock id buyer id seller id price shares offer ids given that i m a broker when i go to the stocks page select market order enter the amount of shares to be bought the shares are sold at the current market price and my balance decreases based on the amount sold | 1 |
408,650 | 11,950,305,705 | IssuesEvent | 2020-04-03 14:59:57 | JensenJ/SpaceSurvival | https://api.github.com/repos/JensenJ/SpaceSurvival | closed | [BUG] Ship Spawn Syncing Transform | bug medium-priority sync-issue | **Describe the bug**
If a player joins after a ship or another object has been spawned, they do not have the ship in the correct position in their world when they join the host player. This is because the parent of the ship object has not been set as the ship spawning object. This bug does not occur if the player is in the world when the ship is spawned however.
**To Reproduce**
Steps to reproduce the behaviour:
1. Host a game
2. Get host to spawn ship
3. Join host's game as client
4. Observe the host's ship rotated strangely and in a different position to the host's version.
**Expected behaviour**
When the player joins the game, the ship position, rotation and parentage in the hierarchy should be synced to prevent this issue from occurring. As a result of this, the behaviour should be corrected.
**Screenshots**
Host Ship:
<img width="572" alt="host" src="https://user-images.githubusercontent.com/13812018/78306143-8f3e5000-753a-11ea-9e47-e69bc1ab1d8c.png">
Client Ship:
<img width="725" alt="client" src="https://user-images.githubusercontent.com/13812018/78306165-9d8c6c00-753a-11ea-916b-b6fe5f71d8f0.png">
| 1.0 | [BUG] Ship Spawn Syncing Transform - **Describe the bug**
If a player joins after a ship or another object has been spawned, they do not have the ship in the correct position in their world when they join the host player. This is because the parent of the ship object has not been set as the ship spawning object. This bug does not occur if the player is in the world when the ship is spawned however.
**To Reproduce**
Steps to reproduce the behaviour:
1. Host a game
2. Get host to spawn ship
3. Join host's game as client
4. Observe the host's ship rotated strangely and in a different position to the host's version.
**Expected behaviour**
When the player joins the game, the ship position, rotation and parentage in the hierarchy should be synced to prevent this issue from occurring. As a result of this, the behaviour should be corrected.
**Screenshots**
Host Ship:
<img width="572" alt="host" src="https://user-images.githubusercontent.com/13812018/78306143-8f3e5000-753a-11ea-9e47-e69bc1ab1d8c.png">
Client Ship:
<img width="725" alt="client" src="https://user-images.githubusercontent.com/13812018/78306165-9d8c6c00-753a-11ea-916b-b6fe5f71d8f0.png">
| priority | ship spawn syncing transform describe the bug if a player joins after a ship or another object has been spawned they do not have the ship in the correct position in their world when they join the host player this is because the parent of the ship object has not been set as the ship spawning object this bug does not occur if the player is in the world when the ship is spawned however to reproduce steps to reproduce the behaviour host a game get host to spawn ship join host s game as client observe the host s ship rotated strangely and in a different position to the host s version expected behaviour when the player joins the game the ship position rotation and parentage in the hierarchy should be synced to prevent this issue from occurring as a result of this the behaviour should be corrected screenshots host ship img width alt host src client ship img width alt client src | 1 |
553,649 | 16,376,097,025 | IssuesEvent | 2021-05-16 05:36:16 | KShewengger/benefit-management | https://api.github.com/repos/KShewengger/benefit-management | opened | Setup Vouchers Module, Initial Functionality and Route | Priority: Medium Type: API Type: Functionality | Common:
- Entity
- Model
- Type
Providers:
- Resolver
- Service
Core:
- Controller
- Module
Routes:
`/vouchers` | 1.0 | Setup Vouchers Module, Initial Functionality and Route - Common:
- Entity
- Model
- Type
Providers:
- Resolver
- Service
Core:
- Controller
- Module
Routes:
`/vouchers` | priority | setup vouchers module initial functionality and route common entity model type providers resolver service core controller module routes vouchers | 1 |
447,286 | 12,887,563,861 | IssuesEvent | 2020-07-13 11:28:31 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | [0.9.0 staging-1634] Coroutine couldn't be started | Category: Tech Priority: Medium Status: Fixed Week Task | When you go to main menu:
```
Coroutine couldn't be started because the the game object 'News(Clone)' is inactive!
News.NewsRenderer:DoRendering(News)
News.NewsController:SetNews(List`1)
News.<SetNews>d__4:MoveNext()
UnityEngine.SetupCoroutine:InvokeMoveNext(IEnumerator, IntPtr)
``` | 1.0 | [0.9.0 staging-1634] Coroutine couldn't be started - When you go to main menu:
```
Coroutine couldn't be started because the the game object 'News(Clone)' is inactive!
News.NewsRenderer:DoRendering(News)
News.NewsController:SetNews(List`1)
News.<SetNews>d__4:MoveNext()
UnityEngine.SetupCoroutine:InvokeMoveNext(IEnumerator, IntPtr)
``` | priority | coroutine couldn t be started when you go to main menu coroutine couldn t be started because the the game object news clone is inactive news newsrenderer dorendering news news newscontroller setnews list news d movenext unityengine setupcoroutine invokemovenext ienumerator intptr | 1 |
802,865 | 29,047,860,883 | IssuesEvent | 2023-05-13 20:06:58 | vdjagilev/nmap-formatter | https://api.github.com/repos/vdjagilev/nmap-formatter | closed | Update Node.js 12 -> 16 for pipelines | priority/medium type/other prop/pipeline | ```
Node.js 12 actions are deprecated. Please update the following actions to use Node.js 16: actions/checkout@v2. For more information see: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/
```
Need to upgrade github actions to version 16 | 1.0 | Update Node.js 12 -> 16 for pipelines - ```
Node.js 12 actions are deprecated. Please update the following actions to use Node.js 16: actions/checkout@v2. For more information see: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/
```
Need to upgrade github actions to version 16 | priority | update node js for pipelines node js actions are deprecated please update the following actions to use node js actions checkout for more information see need to upgrade github actions to version | 1 |
709,164 | 24,369,190,466 | IssuesEvent | 2022-10-03 17:42:05 | Chatterino/chatterino2 | https://api.github.com/repos/Chatterino/chatterino2 | closed | Migrate /subscribersoff command to Helix API | Platform: Twitch Priority: Medium Deprecation: Twitch IRC Commands hacktoberfest | As part of Twitch's announced deprecation of IRC-based commands ([see here for more info](https://discuss.dev.twitch.tv/t/deprecation-of-chat-commands-through-irc/40486)), the `/subscribersoff` command needs to be migrated to use the relevant Helix API endpoint.
Helix API reference: https://dev.twitch.tv/docs/api/reference#update-chat-settings
Split from #3956 | 1.0 | Migrate /subscribersoff command to Helix API - As part of Twitch's announced deprecation of IRC-based commands ([see here for more info](https://discuss.dev.twitch.tv/t/deprecation-of-chat-commands-through-irc/40486), the `/subscribersoff` command needs to be migrated to use the relevant Helix API endpoint.
Helix API reference: https://dev.twitch.tv/docs/api/reference#update-chat-settings
Split from #3956 | priority | migrate subscribersoff command to helix api as part of twitch s announced deprecation of irc based commands the subscribersoff command needs to be migrated to use the relevant helix api endpoint helix api reference split from | 1 |
240,595 | 7,803,312,842 | IssuesEvent | 2018-06-10 22:18:13 | DigitalCampus/django-oppia | https://api.github.com/repos/DigitalCampus/django-oppia | closed | When creating a "ResponseResource" object via the API not all fields are required | invalid medium priority | they should be!
| 1.0 | When creating a "ResponseResource" object via the API not all fields are required - they should be!
| priority | when creating a responseresource object via the api not all fields are required they should be | 1 |
85,136 | 3,686,993,306 | IssuesEvent | 2016-02-25 05:13:48 | cs2103jan2016-w13-4j/main | https://api.github.com/repos/cs2103jan2016-w13-4j/main | closed | Create method that allows removing one tag from a task | component.storage priority.medium | Something like `Task.removeTag(int id, String tag)` | 1.0 | Create method that allows removing one tag from a task - Something like `Task.removeTag(int id, String tag)` | priority | create method that allows removing one tag from a task something like task removetag int id string tag | 1 |
831,536 | 32,051,727,476 | IssuesEvent | 2023-09-23 16:33:41 | space-wizards/space-station-14 | https://api.github.com/repos/space-wizards/space-station-14 | closed | Action Containers broke spellbooks + exception | Issue: Bug Priority: 2-Before Release Difficulty: 2-Medium | ## Description
<!-- Explain your issue in detail. Issues without proper explanation are liable to be closed by maintainers. -->
An exception is thrown when trying to learn spells from spellbooks. The recent action container PR broke this behavior: #20260
**Reproduction**
<!-- Include the steps to reproduce if applicable. -->
Pick up a spellbook and try to learn it, console will show the exception.
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->
**Additional context**
<!-- Add any other context about the problem here. Anything you think is related to the issue. -->
> [ERRO] system.actions: Can't resolve "Content.Shared.Actions.ActionsContainerComponent" on entity 1985!
> at Content.Shared.Actions.SharedActionsSystem.GrantActions(EntityUid performer, IEnumerable`1 actions, EntityUid container, ActionsComponent comp, ActionsContainerComponent containerComp) in J:\Program Files\SS14CloneNew\space-station-14\Content.Shared\Actions\SharedActionsSystem.cs:line 523
> at Content.Shared.Magic.SharedMagicSystem.OnDoAfter(EntityUid uid, SpellbookComponent component, DoAfterEvent args) in J:\Program Files\SS14CloneNew\space-station-14\Content.Shared\Magic\SharedMagicSystem.cs:line 85
> at Robust.Shared.GameObjects.EntityEventBus.<>c__DisplayClass47_0`2.<SubscribeLocalEvent>g__EventHandler|0(EntityUid uid, IComponent comp, TEvent& args) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\GameObjects\EntityEventBus.Directed.cs:line 254
> at Robust.Shared.GameObjects.EntityEventBus.<>c__DisplayClass57_0`1.<EntSubscribe>b__0(EntityUid uid, IComponent comp, Unit& ev) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\GameObjects\EntityEventBus.Directed.cs:line 387
> at Robust.Shared.GameObjects.EntityEventBus.EntDispatch(EntityUid euid, Type eventType, Unit& args, Boolean dispatchByReference) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\GameObjects\EntityEventBus.Directed.cs:line 541
> at Robust.Shared.GameObjects.EntityEventBus.RaiseLocalEventCore(EntityUid uid, Unit& unitRef, Type type, Boolean broadcast, Boolean byRef) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\GameObjects\EntityEventBus.Directed.cs:line 221
> at Robust.Shared.GameObjects.EntityEventBus.RaiseLocalEvent(EntityUid uid, Object args, Boolean broadcast) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\GameObjects\EntityEventBus.Directed.cs:line 190
> at Robust.Shared.GameObjects.EntitySystem.RaiseLocalEvent(EntityUid uid, Object args, Boolean broadcast) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\GameObjects\EntitySystem.cs:line 155
> at Content.Shared.DoAfter.SharedDoAfterSystem.RaiseDoAfterEvents(DoAfter doAfter, DoAfterComponent component) in J:\Program Files\SS14CloneNew\space-station-14\Content.Shared\DoAfter\SharedDoAfterSystem.cs:line 93
> at Content.Shared.DoAfter.SharedDoAfterSystem.TryComplete(DoAfter doAfter, DoAfterComponent component) in J:\Program Files\SS14CloneNew\space-station-14\Content.Shared\DoAfter\SharedDoAfterSystem.Update.cs:line 123
> at Content.Shared.DoAfter.SharedDoAfterSystem.Update(EntityUid uid, ActiveDoAfterComponent active, DoAfterComponent comp, TimeSpan time, EntityQuery`1 xformQuery, EntityQuery`1 handsQuery) in J:\Program Files\SS14CloneNew\space-station-14\Content.Shared\DoAfter\SharedDoAfterSystem.Update.cs:line 68
> at Content.Shared.DoAfter.SharedDoAfterSystem.Update(Single frameTime) in J:\Program Files\SS14CloneNew\space-station-14\Content.Shared\DoAfter\SharedDoAfterSystem.Update.cs:line 23
> at Robust.Shared.GameObjects.EntitySystemManager.TickUpdate(Single frameTime, Boolean noPredictions) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\GameObjects\EntitySystemManager.cs:line 306
> at Robust.Shared.GameObjects.EntityManager.TickUpdate(Single frameTime, Boolean noPredictions, Histogram histogram) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\GameObjects\EntityManager.cs:line 278
> at Robust.Server.GameObjects.ServerEntityManager.TickUpdate(Single frameTime, Boolean noPredictions, Histogram histogram) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Server\GameObjects\ServerEntityManager.cs:line 158
> at Robust.Server.BaseServer.Update(FrameEventArgs frameEventArgs) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Server\BaseServer.cs:line 719
> at Robust.Server.BaseServer.<SetupMainLoop>b__66_1(Object sender, FrameEventArgs args) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Server\BaseServer.cs:line 536
> at Robust.Shared.Timing.GameLoop.Run() in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\Timing\GameLoop.cs:line 235
> at Robust.Server.BaseServer.MainLoop() in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Server\BaseServer.cs:line 563
> at Robust.Server.Program.ParsedMain(CommandLineArgs args, Boolean contentStart, ServerOptions options) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Server\Program.cs:line 78
> at Robust.Server.Program.Start(String[] args, ServerOptions options, Boolean contentStart) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Server\Program.cs:line 46
> at Robust.Server.ContentStart.Start(String[] args) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Server\ContentStart.cs:line 10
> at Content.Server.Program.Main(String[] args) in J:\Program Files\SS14CloneNew\space-station-14\Content.Server\Program.cs:line 9
| 1.0 | Action Containers broke spellbooks + exception - ## Description
<!-- Explain your issue in detail. Issues without proper explanation are liable to be closed by maintainers. -->
An exception is thrown when trying to learn spells from spellbooks. The recent action container PR broke this behavior: #20260
**Reproduction**
<!-- Include the steps to reproduce if applicable. -->
Pick up a spellbook and try to learn it, console will show the exception.
**Screenshots**
<!-- If applicable, add screenshots to help explain your problem. -->
**Additional context**
<!-- Add any other context about the problem here. Anything you think is related to the issue. -->
> [ERRO] system.actions: Can't resolve "Content.Shared.Actions.ActionsContainerComponent" on entity 1985!
> at Content.Shared.Actions.SharedActionsSystem.GrantActions(EntityUid performer, IEnumerable`1 actions, EntityUid container, ActionsComponent comp, ActionsContainerComponent containerComp) in J:\Program Files\SS14CloneNew\space-station-14\Content.Shared\Actions\SharedActionsSystem.cs:line 523
> at Content.Shared.Magic.SharedMagicSystem.OnDoAfter(EntityUid uid, SpellbookComponent component, DoAfterEvent args) in J:\Program Files\SS14CloneNew\space-station-14\Content.Shared\Magic\SharedMagicSystem.cs:line 85
> at Robust.Shared.GameObjects.EntityEventBus.<>c__DisplayClass47_0`2.<SubscribeLocalEvent>g__EventHandler|0(EntityUid uid, IComponent comp, TEvent& args) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\GameObjects\EntityEventBus.Directed.cs:line 254
> at Robust.Shared.GameObjects.EntityEventBus.<>c__DisplayClass57_0`1.<EntSubscribe>b__0(EntityUid uid, IComponent comp, Unit& ev) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\GameObjects\EntityEventBus.Directed.cs:line 387
> at Robust.Shared.GameObjects.EntityEventBus.EntDispatch(EntityUid euid, Type eventType, Unit& args, Boolean dispatchByReference) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\GameObjects\EntityEventBus.Directed.cs:line 541
> at Robust.Shared.GameObjects.EntityEventBus.RaiseLocalEventCore(EntityUid uid, Unit& unitRef, Type type, Boolean broadcast, Boolean byRef) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\GameObjects\EntityEventBus.Directed.cs:line 221
> at Robust.Shared.GameObjects.EntityEventBus.RaiseLocalEvent(EntityUid uid, Object args, Boolean broadcast) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\GameObjects\EntityEventBus.Directed.cs:line 190
> at Robust.Shared.GameObjects.EntitySystem.RaiseLocalEvent(EntityUid uid, Object args, Boolean broadcast) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\GameObjects\EntitySystem.cs:line 155
> at Content.Shared.DoAfter.SharedDoAfterSystem.RaiseDoAfterEvents(DoAfter doAfter, DoAfterComponent component) in J:\Program Files\SS14CloneNew\space-station-14\Content.Shared\DoAfter\SharedDoAfterSystem.cs:line 93
> at Content.Shared.DoAfter.SharedDoAfterSystem.TryComplete(DoAfter doAfter, DoAfterComponent component) in J:\Program Files\SS14CloneNew\space-station-14\Content.Shared\DoAfter\SharedDoAfterSystem.Update.cs:line 123
> at Content.Shared.DoAfter.SharedDoAfterSystem.Update(EntityUid uid, ActiveDoAfterComponent active, DoAfterComponent comp, TimeSpan time, EntityQuery`1 xformQuery, EntityQuery`1 handsQuery) in J:\Program Files\SS14CloneNew\space-station-14\Content.Shared\DoAfter\SharedDoAfterSystem.Upd
> ate.cs:line 68
> at Content.Shared.DoAfter.SharedDoAfterSystem.Update(Single frameTime) in J:\Program Files\SS14CloneNew\space-station-14\Content.Shared\DoAfter\SharedDoAfterSystem.Update.cs:line 23
> at Robust.Shared.GameObjects.EntitySystemManager.TickUpdate(Single frameTime, Boolean noPredictions) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\GameObjects\EntitySystemManager.cs:line 306
> at Robust.Shared.GameObjects.EntityManager.TickUpdate(Single frameTime, Boolean noPredictions, Histogram histogram) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\GameObjects\EntityManager.cs:line 278
> at Robust.Server.GameObjects.ServerEntityManager.TickUpdate(Single frameTime, Boolean noPredictions, Histogram histogram) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Server\GameObjects\ServerEntityManager.cs:line 158
> at Robust.Server.BaseServer.Update(FrameEventArgs frameEventArgs) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Server\BaseServer.cs:line 719
> at Robust.Server.BaseServer.<SetupMainLoop>b__66_1(Object sender, FrameEventArgs args) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Server\BaseServer.cs:line 536
> at Robust.Shared.Timing.GameLoop.Run() in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Shared\Timing\GameLoop.cs:line 235
> at Robust.Server.BaseServer.MainLoop() in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Server\BaseServer.cs:line 563
> at Robust.Server.Program.ParsedMain(CommandLineArgs args, Boolean contentStart, ServerOptions options) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Server\Program.cs:line 78
> at Robust.Server.Program.Start(String[] args, ServerOptions options, Boolean contentStart) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Server\Program.cs:line 46
> at Robust.Server.ContentStart.Start(String[] args) in J:\Program Files\SS14CloneNew\space-station-14\RobustToolbox\Robust.Server\ContentStart.cs:line 10
> at Content.Server.Program.Main(String[] args) in J:\Program Files\SS14CloneNew\space-station-14\Content.Server\Program.cs:line 9
| priority | action containers broke spellbooks exception description an exception is thrown when trying to learn spells from spellbooks the recent action container pr broke this behavior reproduction pick up a spellbook and try to learn it console will show the exception screenshots additional context system actions can t resolve content shared actions actionscontainercomponent on entity at content shared actions sharedactionssystem grantactions entityuid performer ienumerable actions entityuid container actionscomponent comp actionscontainercomponent containercomp in j program files space station content shared actions sharedactionssystem cs l ine at content shared magic sharedmagicsystem ondoafter entityuid uid spellbookcomponent component doafterevent args in j program files space station content shared magic sharedmagicsystem cs line at robust shared gameobjects entityeventbus c g eventhandler entityuid uid icomponent comp tevent args in j program files space station robusttoolbox robust shared gameobjects entityeventbus directed cs line at robust shared gameobjects entityeventbus c b entityuid uid icomponent comp unit ev in j program files space station robusttoolbox robust shared gameobjects entityeventbus directed cs line at robust shared gameobjects entityeventbus entdispatch entityuid euid type eventtype unit args boolean dispatchbyreference in j program files space station robusttoolbox robust shared gameobjects entityeventbus directed cs line at robust shared gameobjects entityeventbus raiselocaleventcore entityuid uid unit unitref type type boolean broadcast boolean byref in j program files space station robusttoolbox robust shared gameobjects entityeventbus directed cs line at robust shared gameobjects entityeventbus raiselocalevent entityuid uid object args boolean broadcast in j program files space station robusttoolbox robust shared gameobjects entityeventbus directed cs line at robust shared gameobjects entitysystem raiselocalevent entityuid uid 
object args boolean broadcast in j program files space station robusttoolbox robust shared gameobjects entitysystem cs line at content shared doafter shareddoaftersystem raisedoafterevents doafter doafter doaftercomponent component in j program files space station content shared doafter shareddoaftersystem cs line at content shared doafter shareddoaftersystem trycomplete doafter doafter doaftercomponent component in j program files space station content shared doafter shareddoaftersystem update cs line at content shared doafter shareddoaftersystem update entityuid uid activedoaftercomponent active doaftercomponent comp timespan time entityquery xformquery entityquery handsquery in j program files space station content shared doafter shareddoaftersystem upd ate cs line at content shared doafter shareddoaftersystem update single frametime in j program files space station content shared doafter shareddoaftersystem update cs line at robust shared gameobjects entitysystemmanager tickupdate single frametime boolean nopredictions in j program files space station robusttoolbox robust shared gameobjects entitysystemmanager cs line at robust shared gameobjects entitymanager tickupdate single frametime boolean nopredictions histogram histogram in j program files space station robusttoolbox robust shared gameobjects entitymanager cs line at robust server gameobjects serverentitymanager tickupdate single frametime boolean nopredictions histogram histogram in j program files space station robusttoolbox robust server gameobjects serverentitymanager cs line at robust server baseserver update frameeventargs frameeventargs in j program files space station robusttoolbox robust server baseserver cs line at robust server baseserver b object sender frameeventargs args in j program files space station robusttoolbox robust server baseserver cs line at robust shared timing gameloop run in j program files space station robusttoolbox robust shared timing gameloop cs line at robust server 
baseserver mainloop in j program files space station robusttoolbox robust server baseserver cs line at robust server program parsedmain commandlineargs args boolean contentstart serveroptions options in j program files space station robusttoolbox robust server program cs line at robust server program start string args serveroptions options boolean contentstart in j program files space station robusttoolbox robust server program cs line at robust server contentstart start string args in j program files space station robusttoolbox robust server contentstart cs line at content server program main string args in j program files space station content server program cs line | 1 |
490,200 | 14,116,621,813 | IssuesEvent | 2020-11-08 04:19:50 | AY2021S1-CS2103-T16-2/tp | https://api.github.com/repos/AY2021S1-CS2103-T16-2/tp | opened | Check Code Quality for Joven's Features | priority.Medium | Will only close when the documentation for the code quality improvements is made. | 1.0 | Check Code Quality for Joven's Features - Will only close when the documentation for the code quality improvements is made. | priority | check code quality for joven s features will only close when the documentation for code quality improvements are made | 1 |
1,478 | 2,514,730,315 | IssuesEvent | 2015-01-15 14:03:16 | OCHA-DAP/hdx-ckan | https://api.github.com/repos/OCHA-DAP/hdx-ckan | opened | Default Country Page: map | Priority-Medium | Blocked by #2102 and #2103
Annotated (very annotated) design is here:
https://docs.google.com/drawings/d/1qOBKZ7IO7zkEMHh2g3ZiAynh5PiAbO-_-SL4rd9uO_M/edit
Implement the map section | 1.0 | Default Country Page: map - Blocked by #2102 and #2103
Annotated (very annotated) design is here:
https://docs.google.com/drawings/d/1qOBKZ7IO7zkEMHh2g3ZiAynh5PiAbO-_-SL4rd9uO_M/edit
Implement the map section | priority | default country page map blocked by and annotated very annotated design is here implement the map section | 1 |
257,581 | 8,139,280,981 | IssuesEvent | 2018-08-20 17:11:27 | nprapps/elections18-general | https://api.github.com/repos/nprapps/elections18-general | closed | Check in on admin panel | effort:medium priority:high | We'll need to touch up our logic around "who's won the chamber" and handle a few other related aspects, such as whether it can show two Senate races in a single state. | 1.0 | Check in on admin panel - We'll need to touch up our logic around "who's won the chamber" and handle a few other related aspects, such as whether it can show two Senate races in a single state. | priority | check in on admin panel we ll need to touch up our logic around who s won the chamber and handle a few other related aspects such as whether it can show two senate races in a single state | 1 |
58,155 | 3,087,857,656 | IssuesEvent | 2015-08-25 14:05:15 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | opened | Parsing of nested BB-codes works poorly | bug imported Priority-Medium | _From [[email protected]](https://code.google.com/u/101495626515388303633/) on November 09, 2013 00:47:35_
Definitely broken:
1. All text-style tags nested inside a color tag
2. A color nested inside another color does not work
3. The rendering of styles depends on the order in which the tags are applied:
[i][b]bold italic[/b][/i] - renders only italic
[b][i]italic bold[/i][/b] - shows bold italic
If anyone finds anything else, please add it here.
But the cure is not to fix the listed errors one by one, but the whole parsing algorithm, which comes down
to splitting the string into segments and assigning each segment the correct attributes (I counted 5 of them), taking tag nesting into account (for example, by applying some kind of stack-based approach)
**Attachment:** [Fly_r15987_BBcodes.png Fly_r15988_BBcodes.png](http://code.google.com/p/flylinkdc/issues/detail?id=1391)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1391_ | 1.0 | Parsing of nested BB-codes works poorly - _From [[email protected]](https://code.google.com/u/101495626515388303633/) on November 09, 2013 00:47:35_
Definitely broken:
1. All text-style tags nested inside a color tag
2. A color nested inside another color does not work
3. The rendering of styles depends on the order in which the tags are applied:
[i][b]bold italic[/b][/i] - renders only italic
[b][i]italic bold[/i][/b] - shows bold italic
If anyone finds anything else, please add it here.
But the cure is not to fix the listed errors one by one, but the whole parsing algorithm, which comes down
to splitting the string into segments and assigning each segment the correct attributes (I counted 5 of them), taking tag nesting into account (for example, by applying some kind of stack-based approach)
**Attachment:** [Fly_r15987_BBcodes.png Fly_r15988_BBcodes.png](http://code.google.com/p/flylinkdc/issues/detail?id=1391)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1391_ | priority | плохо работает разбор вложенных bb кодов from on november точно не работает все тэги стиля текста вложенные в тэг цвета не работает цвет вложенный в другой цвет отображение стилей зависит от порядка применения тэгов болд италик отображает только курсив италик болд показывает жирный курсив кто еще что найдет дополняйте но лечить надо не указанные ошибки по отдельности а весь алгоритм разбора который сводится к разбиению строки на участки и назначении каждому участку правильных атрибутов я насчитал штук с учетом вложения тэгов например с применением какого то стекового принципа attachment original issue | 1 |
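The stack-based re-parse that this reporter asks for can be sketched in a few lines. This is an illustrative toy, not FlylinkDC's actual code: the two-tag grammar, the `segment` function, and its return shape are all invented for the example. It only shows the core idea — split the string into runs of text, each tagged with the set of styles active at that point.

```python
# Toy sketch (hypothetical, not FlylinkDC code) of the stack/set-based
# segmentation described in the issue: walk the string, track which style
# tags are currently open, and emit (text, active-styles) segments.
import re

TAG = re.compile(r"\[(/?)([bi])\]")  # toy grammar: only [b] and [i]

def segment(s):
    active, out, pos = set(), [], 0
    for m in TAG.finditer(s):
        if m.start() > pos:                      # text before this tag
            out.append((s[pos:m.start()], frozenset(active)))
        if m.group(1):                           # closing tag
            active.discard(m.group(2))
        else:                                    # opening tag
            active.add(m.group(2))
        pos = m.end()
    if pos < len(s):                             # trailing text
        out.append((s[pos:], frozenset(active)))
    return out
```

With this scheme a segment's attributes no longer depend on the order in which the tags were opened, which is exactly point 3 of the report.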
611,953 | 18,985,572,246 | IssuesEvent | 2021-11-21 17:01:28 | dehy/foodcoop-mobile-app | https://api.github.com/repos/dehy/foodcoop-mobile-app | closed | Migrate the legacy DB to TypeORM | Priority: Medium Status: In Progress Type: Refactoring | The inventory SQLite DB (legacy, homemade) is different from the goods-receipt SQLite DB (TypeORM, managed).
Migrate the schema and the existing data to TypeORM. | 1.0 | Migrate the legacy DB to TypeORM - The inventory SQLite DB (legacy, homemade) is different from the goods-receipt SQLite DB (TypeORM, managed).
Migrate the schema and the existing data to TypeORM. | priority | migrer la bdd legacy vers typeorm la bdd sqlite inventaire legacy fait maison est différente de la bdd sqlite réception de marchandises typeorm managée migrer le schéma et les données existantes vers typeorm | 1 |
24,605 | 2,669,425,474 | IssuesEvent | 2015-03-23 15:26:48 | aseprite/aseprite | https://api.github.com/repos/aseprite/aseprite | closed | Move colours in palette editor | colorbar enhancement imported medium priority | _From [[email protected]](https://code.google.com/u/103014587505849798163/) on July 31, 2011 18:08:24_
What do you need to do? Move palette colours in the palette editor to different positions, making it easier to manually sort palettes. How would you like to do it? Using the arrow keys, which are presently not used by the palette editor.
_Original issue: http://code.google.com/p/aseprite/issues/detail?id=37_ | 1.0 | Move colours in palette editor - _From [[email protected]](https://code.google.com/u/103014587505849798163/) on July 31, 2011 18:08:24_
What do you need to do? Move palette colours in the palette editor to different positions, making it easier to manually sort palettes. How would you like to do it? Using the arrow keys, which are presently not used by the palette editor.
_Original issue: http://code.google.com/p/aseprite/issues/detail?id=37_ | priority | move colours in palette editor from on july what do you need to do move palette colours in the palette editor to different positions making it easier to manually sort palettes how would you like to do it using the arrow keys which are presently not used by the palette editor original issue | 1 |
249,085 | 7,953,759,174 | IssuesEvent | 2018-07-12 03:38:18 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | USER ISSUE: 64 bit server in 32 bit game version | Medium Priority | **Version:** 0.7.1.2 beta
**Steps to Reproduce:**
in the 32 bit game version there is a 64 bit server, so you can't play the game in singleplayer
**Expected behavior:**
**Actual behavior:**
| 1.0 | USER ISSUE: 64 bit server in 32 bit game version - **Version:** 0.7.1.2 beta
**Steps to Reproduce:**
in the 32 bit game version there is a 64 bit server, so you can't play the game in singleplayer
**Expected behavior:**
**Actual behavior:**
| priority | user issue bit server in bit game version version beta steps to reproduce in the bit game version is a bit server so you cant play the game in singelplayer expected behavior actual behavior | 1 |
75,711 | 3,471,331,702 | IssuesEvent | 2015-12-23 14:47:45 | aic-collections/aicdams-lakeshore | https://api.github.com/repos/aic-collections/aicdams-lakeshore | opened | Move Document Type input field to the top in asset edit view | MEDIUM priority presentation | Move Document Type to the top, before Title. | 1.0 | Move Document Type input field to the top in asset edit view - Move Document Type to the top, before Title. | priority | move document type input field to the top in asset edit view move document type to the top before title | 1 |
782,545 | 27,499,750,739 | IssuesEvent | 2023-03-05 15:05:14 | brandondombrowsky/BastCastle | https://api.github.com/repos/brandondombrowsky/BastCastle | closed | Write a script to demo for Google Integration | priority-medium | As a developer I want to showcase my work so my client knows I am on track.
Can be super duper simple, just any step above "Hey Google turn on my light"; Something more like "Hey Google run 'demo light script'" | 1.0 | Write a script to demo for Google Integration - As a developer I want to showcase my work so my client knows I am on track.
Can be super duper simple, just any step above "Hey Google turn on my light"; Something more like "Hey Google run 'demo light script'" | priority | write a script to demo for google integration as a developer i want to showcase my work so my client knows i am on track can be super duper simple just any step above hey google turn on my light something more like hey google run demo light script | 1 |
186,218 | 6,734,460,856 | IssuesEvent | 2017-10-18 18:07:43 | octobercms/october | https://api.github.com/repos/octobercms/october | closed | Attachments inside a Relation "One to One" don't work | Priority: Medium Status: Abandoned Status: Review Needed Type: Unconfirmed Bug | Hello guys,
I have created two entities and connected them with a one-to-one relation. When I attach a file in the second entity (connected to the first through a one-to-one partial), it doesn't store the file.
To be precise, the system adds the value in the "system_files" table but it doesn't store the connection.
I thought that was my fault, so I tested the same thing using the October test plugin (oc-test-plugin). I used the Person and Phone models.
The result is the same.
I hope I was helpful.
| 1.0 | Attachments inside a Relation "One to One" don't work - Hello guys,
I have created two entities and connected them with a one-to-one relation. When I attach a file in the second entity (connected to the first through a one-to-one partial), it doesn't store the file.
To be precise, the system adds the value in the "system_files" table but it doesn't store the connection.
I thought that was my fault, so I tested the same thing using the October test plugin (oc-test-plugin). I used the Person and Phone models.
The result is the same.
I hope I was helpful.
| priority | attachments inside a relation one to one don t work hello guys i have created two entities and i have connected them with a relation one to one when i have attached a file in the second entities connected with the first through a one to one partial it doesn t store the file to be precise the system adds the value in the system files table but it doesn t store the connection i thought that was my fault so i have tested the same thing using october test plugin oc test plugin i have used the person and phone models the result is the same i hope i was helpful | 1 |
538,663 | 15,775,083,877 | IssuesEvent | 2021-04-01 02:14:13 | bacuarabrasil/krenak-api | https://api.github.com/repos/bacuarabrasil/krenak-api | closed | US03 - Update user | F01 - Cadastro priority:medium | As a user, I want to edit my data so that I can provide the correct and most recent information to the Krenak app team.
Acceptance criteria:
API:
- [x] Allows editing of first name, last name, and date of birth
- [x] Must allow editing both via the REST API and through the administrator dashboard
App:
- [x] Must contain a data-editing screen
- [x] Must contain the fields allowed by the API, all optional and available for editing | 1.0 | US03 - Update user - As a user, I want to edit my data so that I can provide the correct and most recent information to the Krenak app team.
Acceptance criteria:
API:
- [x] Allows editing of first name, last name, and date of birth
- [x] Must allow editing both via the REST API and through the administrator dashboard
App:
- [x] Must contain a data-editing screen
- [x] Must contain the fields allowed by the API, all optional and available for editing | priority | atualizar usuário como usuário eu quero editar meus dados para disponibilizar as informações corretas e mais recentes para a equipe do app krenak critérios de aceite api permite edição de nome sobrenome e data de nascimento deverá permitir edição tanto via api rest quanto pela dashboard de administrador app deve conter uma tela de edição de dados deve conter os campos permitidos pela api todos como opcionais disponiveis para edição | 1 |
410,287 | 11,985,925,735 | IssuesEvent | 2020-04-07 18:22:56 | OpenLiberty/open-liberty | https://api.github.com/repos/OpenLiberty/open-liberty | opened | acmeCA:2-0: Address what triggers a cert refresh on an update | priority/medium team:Core Security team:Wendigo East | `AcmeProviderImpl.updateAcmeConfigService`
```
/*
* TODO We need to determine which configuration changes will result
* in requiring a certificate to be refreshed. Some that might
* trigger a refresh: validFor, directoryURI, country, locality,
* state, organization, organizationUnit
*
* We can't necessarily just check the certificate, b/c they don't
* always honor them.
*/
```
For #9017 | 1.0 | acmeCA:2-0: Address what triggers a cert refresh on an update - `AcmeProviderImpl.updateAcmeConfigService`
```
/*
* TODO We need to determine which configuration changes will result
* in requiring a certificate to be refreshed. Some that might
* trigger a refresh: validFor, directoryURI, country, locality,
* state, organization, organizationUnit
*
* We can't necessarily just check the certificate, b/c they don't
* always honor them.
*/
```
For #9017 | priority | acmeca address what triggers an a cert refresh on an update acmeproviderimpl updateacmeconfigservice todo we need to determine which configuration changes will result in requiring a certificate to be refreshed some that might trigger a refresh validfor directoryuri country locality state organization organizationunit we can t necessarily just check the certificate b c they don t always honor them for | 1 |
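The decision sketched in that TODO amounts to a small configuration diff. A hedged illustration, not Open Liberty's actual implementation: the key names simply mirror the candidate fields listed in the TODO, and `needs_refresh` is an invented helper.

```python
# Hypothetical sketch: refresh the certificate only when a configuration key
# that affects the issued certificate (per the TODO's candidate list) changed.
REFRESH_TRIGGERS = {
    "validFor", "directoryURI", "country", "locality",
    "state", "organization", "organizationUnit",
}

def needs_refresh(old_config, new_config):
    # Keys whose values differ between the two configurations.
    changed = {k for k in old_config.keys() | new_config.keys()
               if old_config.get(k) != new_config.get(k)}
    return bool(changed & REFRESH_TRIGGERS)
```

As the TODO itself warns, a check like this has to be driven by the requested configuration rather than by inspecting the issued certificate, because CAs don't always honor the requested fields.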
43,262 | 2,887,050,418 | IssuesEvent | 2015-06-12 12:49:01 | thSoft/elysium | https://api.github.com/repos/thSoft/elysium | closed | Implement full grammar | auto-migrated Priority-Medium Type-Enhancement | ```
Define the semantic model for all features of LilyPond. Among others, this
would enable generating scores in a model-based way.
```
Original issue reported on code.google.com by `harmathdenes` on 3 Aug 2012 at 9:39 | 1.0 | Implement full grammar - ```
Define the semantic model for all features of LilyPond. Among others, this
would enable generating scores in a model-based way.
```
Original issue reported on code.google.com by `harmathdenes` on 3 Aug 2012 at 9:39 | priority | implement full grammar define the semantic model for all features of lilypond among others this would enable generating scores in a model based way original issue reported on code google com by harmathdenes on aug at | 1 |
162,538 | 6,154,982,119 | IssuesEvent | 2017-06-28 13:54:06 | k0shk0sh/FastHub | https://api.github.com/repos/k0shk0sh/FastHub | closed | Won't mark notification as read | Priority: Medium Status: Accepted Status: Completed | **App Version: 3.2.0**
**OS Version: 25**
**Model: LGE-Nexus 5X**
I have the settings to __not__ mark as read notifications when I click on them. That way I can check them out and still have them when I reach a desktop version of GitHub.
The problem is that sometimes I want to mark a notification as read because I don't have to see it when I get to the desktop, so I press the ☑️ to mark it and it still won't get marked.
I'm guessing it happens because I have the settings set to not mark as read.
_Sent from my LGE Nexus 5X using [FastHub](https://play.google.com/store/apps/details?id=com.fastaccess.github)_ | 1.0 | Won't mark notification as read - **App Version: 3.2.0**
**OS Version: 25**
**Model: LGE-Nexus 5X**
I have the settings to __not__ mark as read notifications when I click on them. That way I can check them out and still have them when I reach a desktop version of GitHub.
The problem is that sometimes I want to mark a notification as read because I don't have to see it when I get to the desktop, so I press the ☑️ to mark it and it still won't get marked.
I'm guessing it happens because I have the settings set to not mark as read.
_Sent from my LGE Nexus 5X using [FastHub](https://play.google.com/store/apps/details?id=com.fastaccess.github)_ | priority | won t mark notification as read app version os version model lge nexus i have the settings to not mark as read notifications when i click on them that way i can check them out and still have them when i reach a desktop version of github the problem is that sometimes i want to mark a notification as read because i don t have to see it when i get to the desktop so i press the ☑️ to mark it and it still won t get marked i m guessing it happens because i have the settings set to not mark as read sent from my lge nexus using | 1 |
255,551 | 8,125,417,354 | IssuesEvent | 2018-08-16 20:52:14 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | StreetLight shadows missing | Art Medium Priority | Looks like cast shadows got turned off on them for some reason?
![image](https://user-images.githubusercontent.com/774628/44125527-0bfb2a5c-9fe8-11e8-92a8-f8c9dae4a907.png)
| 1.0 | StreetLight shadows missing - Looks like cast shadows got turned off on them for some reason?
![image](https://user-images.githubusercontent.com/774628/44125527-0bfb2a5c-9fe8-11e8-92a8-f8c9dae4a907.png)
| priority | streetlight shadows missing looks like cast shadows got turned off on them for some reason | 1 |
187,742 | 6,760,834,149 | IssuesEvent | 2017-10-24 22:10:11 | b3aver/Automate | https://api.github.com/repos/b3aver/Automate | opened | In viewMode show Actions without the input fields | enhancement priority:minor time:medium topic:ui | Insert span tags with the saved information.
Also substitute the select fields. | 1.0 | In viewMode show Actions without the input fields - Insert span tags with the saved information.
Also substitute the select fields. | priority | in viewmode show actions without the input fields insert span tags with the saved informations substitute also the select fields | 1 |
705,008 | 24,218,250,783 | IssuesEvent | 2022-09-26 08:42:10 | wp-media/wp-rocket | https://api.github.com/repos/wp-media/wp-rocket | reopened | RUCSS counter msg is not displayed without refresh in certain case | type: bug priority: medium effort: [XS] severity: moderate module: remove unused css | **Before submitting an issue please check that you’ve completed the following steps:**
- Made sure you’re on the latest version =>. 3.11.4
- Used the search feature to ensure that the bug hasn’t been reported before
**Describe the bug**
Counter msg for RUCSS is not displayed till refreshing the dashboard after activating RUCSS
**To Reproduce**
Steps to reproduce the behavior:
1. Fresh install to wpr
2. Activate RUCSS and save
3. Counter msg not displayed till refreshing dashboard
**Expected behavior**
Counter msg displayed after saving settings with no need to manually refresh the page
**Screenshots**
If applicable, add screenshots to help explain your problem.
https://jmp.sh/zqxRj0D
**Additional context**
Add any other context about the problem here.
- Same with PHP 7.4.3 and 8.1.7
- The same scenario was working fine on 3.11.3
- Another case when permissions are read-only to cache folder then enable RUCSS, once permissions are back, we need to refresh twice so we can see the RUCSS msg => in this case, if we fixed permissions after RUCSS was enabled by > 90sec, the success msg will be displayed not the counter although nothing is completed yet in used CSS table
- Note: clear used CSS, reactivate RUCSS, change safelist => all displaying msg with no need to refresh
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
RUCSS counter msg is not displayed without refresh in certain case - **Before submitting an issue please check that you’ve completed the following steps:**
- Made sure you’re on the latest version =>. 3.11.4
- Used the search feature to ensure that the bug hasn’t been reported before
**Describe the bug**
Counter msg for RUCSS is not displayed till refreshing the dashboard after activating RUCSS
**To Reproduce**
Steps to reproduce the behavior:
1. Fresh install to wpr
2. Activate RUCSS and save
3. Counter msg not displayed till refreshing dashboard
**Expected behavior**
Counter msg displayed after saving settings with no need to manually refresh the page
**Screenshots**
If applicable, add screenshots to help explain your problem.
https://jmp.sh/zqxRj0D
**Additional context**
Add any other context about the problem here.
- Same with PHP 7.4.3 and 8.1.7
- The same scenario was working fine on 3.11.3
- Another case when permissions are read-only to cache folder then enable RUCSS, once permissions are back, we need to refresh twice so we can see the RUCSS msg => in this case, if we fixed permissions after RUCSS was enabled by > 90sec, the success msg will be displayed not the counter although nothing is completed yet in used CSS table
- Note: clear used CSS, reactivate RUCSS, change safelist => all displaying msg with no need to refresh
**Backlog Grooming (for WP Media dev team use only)**
- [ ] Reproduce the problem
- [ ] Identify the root cause
- [ ] Scope a solution
- [ ] Estimate the effort
| priority | rucss counter msg isnot displayed without refresh in certain case before submitting an issue please check that you’ve completed the following steps made sure you’re on the latest version used the search feature to ensure that the bug hasn’t been reported before describe the bug counter msg for rucss is not displayed till refreshing the dashboard after activating rucss to reproduce steps to reproduce the behavior fresh install to wpr activate rucss and save counter msg not displayed till refreshing dashboard expected behavior counter msg displayed after saving settings with no need to manually refresh the page screenshots if applicable add screenshots to help explain your problem additional context add any other context about the problem here same with php and the same scenario was working fine on another case when permissions are read only to cache folder then enable rucss once permissions are back we need to refresh twice so we can see the rucss msg in this case if we fixed permissions after rucss was enabled by the success msg will be displayed not the counter although nothing is completed yet in used css table note clear used css reactivate rucss change safelist all displaying msg with no need to refresh backlog grooming for wp media dev team use only reproduce the problem identify the root cause scope a solution estimate the effort | 1 |
44,696 | 2,910,632,639 | IssuesEvent | 2015-06-21 23:02:38 | SteamDatabase/steamSummerMinigame | https://api.github.com/repos/SteamDatabase/steamSummerMinigame | closed | Ignore raining gold on trash mobs late-game | 2 - Medium Priority Enhancement | Probably anything past level 1500 (or 10 railgun + 10 mouse button + 20 elemental damage levels) isn't worth switching to the raining gold lane unless Max Elemental Damage is also active. Prioritizing quick group levels by focusing on users' elemental spec will yield far more gold on bosses than anything one could gather from trash lane gold clicks. | 1.0 | Ignore raining gold on trash mobs late-game - Probably anything past level 1500 (or 10 railgun + 10 mouse button + 20 elemental damage levels) isn't worth switching to the raining gold lane unless Max Elemental Damage is also active. Prioritizing quick group levels by focusing on users' elemental spec will yield far more gold on bosses than anything one could gather from trash lane gold clicks. | priority | ignore raining gold on trash mobs late game probably anything past level or railgun mouse button elemental damage levels isn t worth switching to the raining gold lane unless max elemental damage is also active prioritizing quick group levels by focusing on users elemental spec will yield far more gold on bosses than anything one could gather from trash lane gold clicks | 1 |
602,292 | 18,460,287,724 | IssuesEvent | 2021-10-15 23:37:09 | michaelrsweet/pdfio | https://api.github.com/repos/michaelrsweet/pdfio | closed | make all-shared Makefile errors | bug priority-medium | when running `make all-shared`, make fails with several errors
- lines 72,112: `else` is missing trailing backslash
- line 128: `soname` should be `-soname`
- making a shared object requires `-fPIC`
Thanks for the project, it has been extremely useful.
| 1.0 | make all-shared Makefile errors - when running `make all-shared`, make fails with several errors
- lines 72,112: `else` is missing trailing backslash
- line 128: `soname` should be `-soname`
- making a shared object requires `-fPIC`
Thanks for the project, it has been extremely useful.
| priority | make all shared makefile errors when running make all shared make fails with several errors lines else is missing trailing backslash line soname should be soname making a shared object requires fpic thanks for the project it has been extremely useful | 1 |
318,528 | 9,693,884,370 | IssuesEvent | 2019-05-24 17:20:37 | CosmiQ/solaris | https://api.github.com/repos/CosmiQ/solaris | closed | Re-write image stitching using torch/tensorflow | Difficulty: Medium Priority: Medium Type: Maintenance | As it stands now, `sol.raster.image.stitch_images` uses numpy to stitch images together. This means that images have to be moved back from the GPU to the CPU to run, and lose any advantage that GPU processing could potentially provide. Particularly since this is almost always done for post-inference images (i.e. `stitch_images()` is called within `sol.nets.infer.Inferer.__call__()`), the objects being merged are likely to often be torch tensors (or could be converted to torch tensors if they're coming from keras). We should therefore implement GPU-based post-processing using `stitch_images()`:
- [ ] re-write `stitch_images()` to use torch tensor operations instead of numpy arrays
- [ ] enable checking for GPU availability and use GPUs if possible
- [ ] enable check to see if data is in a numpy array when it's read in, and if so, convert it to a torch tensor | 1.0 | Re-write image stitching using torch/tensorflow - As it stands now, `sol.raster.image.stitch_images` uses numpy to stitch images together. This means that images have to be moved back from the GPU to the CPU to run, and lose any advantage that GPU processing could potentially provide. Particularly since this is almost always done for post-inference images (i.e. `stitch_images()` is called within `sol.nets.infer.Inferer.__call__()`), the objects being merged are likely to often be torch tensors (or could be converted to torch tensors if they're coming from keras). We should therefore implement GPU-based post-processing using `stitch_images()`:
- [ ] re-write `stitch_images()` to use torch tensor operations instead of numpy arrays
- [ ] enable checking for GPU availability and use GPUs if possible
- [ ] enable check to see if data is in a numpy array when it's read in, and if so, convert it to a torch tensor | priority | re write image stitching using torch tensorflow as it stands now sol raster image stitch images uses numpy to stitch images together this means that images have to be moved back from the gpu to the cpu to run and lose any advantage that gpu processing could potentially provide particularly since this is almost always done for post inference images i e stitch images is called within sol nets infer inferer call the objects being merged are likely to often be torch tensors or could be converted to torch tensors if they re coming from keras we should therefore implement gpu based post processing using stitch images re write stitch images to use torch tensor operations instead of numpy arrays enable checking for gpu availability and use gpus if possible enable check to see if data is in a numpy array when it s read in and if so convert it to a torch tensor | 1 |
699,059 | 24,002,684,770 | IssuesEvent | 2022-09-14 12:43:55 | status-im/status-desktop | https://api.github.com/repos/status-im/status-desktop | closed | Chat - Pinned messages - Pinned limit reduces after unpinning | bug priority 2: medium E:Bugfixes | # Bug Report
## Description
After a message has been unpinned then the pinned message limit is reduced by 1.
## Steps to reproduce
1. Pin 4 messages (pin limit)
2. 5th pin causes pinned limit reached prompt
3. Unpin 1 message (3 pinned messages)
4. Pin 1 more message - pin limit reached at 3 messages
(pin limit reduces each time)
#### Expected behavior
Pinned message limit remains consistent
#### Actual behavior
![image](https://user-images.githubusercontent.com/50769329/186421353-8e586926-e4e8-4694-8390-6d8ea4f166e3.png)
### Additional Information
- Status desktop version: https://ci.status.im/job/status-desktop/job/platforms/job/macos/lastSuccessfulBuild/artifact/pkg/StatusIm-Desktop-220824-111243-1c7719.dmg
- Operating System: Mac
| 1.0 | Chat - Pinned messages - Pinned limit reduces after unpinning - # Bug Report
## Description
After a message has been unpinned then the pinned message limit is reduced by 1.
## Steps to reproduce
1. Pin 4 messages (pin limit)
2. 5th pin causes pinned limit reached prompt
3. Unpin 1 message (3 pinned messages)
4. Pin 1 more message - pin limit reached at 3 messages
(pin limit reduces each time)
#### Expected behavior
Pinned message limit remains consistent
#### Actual behavior
![image](https://user-images.githubusercontent.com/50769329/186421353-8e586926-e4e8-4694-8390-6d8ea4f166e3.png)
### Additional Information
- Status desktop version: https://ci.status.im/job/status-desktop/job/platforms/job/macos/lastSuccessfulBuild/artifact/pkg/StatusIm-Desktop-220824-111243-1c7719.dmg
- Operating System: Mac
| priority | chat pinned messages pinned limit reduces after unpinning bug report description after a message has been unpinned then the pinned message limit is reduced by steps to reproduce pin messages pin limit pin causes pinned limit reached prompt unpin message pinned messages pin more message pin limit reached at messages pin limit reduces each time expected behavior pinned message limit remains consistent actual behavior additional information status desktop version operating system mac | 1 |
49,602 | 3,003,708,046 | IssuesEvent | 2015-07-25 05:52:08 | jayway/powermock | https://api.github.com/repos/jayway/powermock | opened | PowerMock is of no use for code coverage | bug imported Priority-Medium | _From [[email protected]](https://code.google.com/u/109015946327936505335/) on March 21, 2014 08:12:16_
What steps will reproduce the problem?
1. Write a successful test case using PowerMock
2. Run the maven command for the cobertura report (or even Sonar)
3. Open the cobertura report to see if the code is covered

What is the expected output? What do you see instead?
Expected: the code in the report must be shown as covered. But it is observed that the code remains uncovered.

What version of the product are you using? On what operating system?
easymock 3.0
powermock-module-junit4 1.4.12
powermock-api-easymock 1.4.12
Windows 7 for Cobertura, and Linux for Sonar

Please provide any additional information below.
When the code coverage remains unchanged, PowerMock is of no use. Would be delighted to see the resolution.
_Original issue: http://code.google.com/p/powermock/issues/detail?id=489_ | 1.0 | PowerMock is of no use for code coverage - _From [[email protected]](https://code.google.com/u/109015946327936505335/) on March 21, 2014 08:12:16_
What steps will reproduce the problem?
1. Write a successful test case using PowerMock
2. Run the maven command for the cobertura report (or even Sonar)
3. Open the cobertura report to see if the code is covered

What is the expected output? What do you see instead?
Expected: the code in the report must be shown as covered. But it is observed that the code remains uncovered.

What version of the product are you using? On what operating system?
easymock 3.0
powermock-module-junit4 1.4.12
powermock-api-easymock 1.4.12
Windows 7 for Cobertura, and Linux for Sonar

Please provide any additional information below.
When the code coverage remains unchanged, PowerMock is of no use. Would be delighted to see the resolution.
_Original issue: http://code.google.com/p/powermock/issues/detail?id=489_ | priority | powermock is of no use for code coverage from on march what steps will reproduce the problem write a succesful testcase using powermock run maven command for cobertura report or sonar even open cobertura report to see if code is covered what is the expected output what do you see instead expected is the code in the report must be shown as covered but it is observed that the code remains uncovered what version of the product are you using on what operating system easymock powermock module powermock api easymock windows for cobertura and linux for sonar please provide any additional information below when the code coverage remains unchanged it is of no use using powermock would be delighted to see the resolution original issue | 1 |
56,522 | 3,080,196,197 | IssuesEvent | 2015-08-21 20:38:33 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | closed | MediaInfo hangs on some files. | bug Component-Logic Component-MediaInfo imported Priority-Medium | _From [[email protected]](https://code.google.com/u/117892482479228821242/) on March 20, 2012 10:12:03_
In version r502-beta6 build 9479, MediaInfo enters an infinite loop on the file magnet:?xt=urn:tree:tiger:YLAV4JILJOWHXKIZSOOXBRPLIYP64LE5O7BUPMQ&xl=254684390&dn=Probki(divx).avi
p.s.: the program does not close; it has to be killed via the task manager.
p.p.s.: too lazy to fix it myself :)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=720_ | 1.0 | MediaInfo hangs on some files. - _From [[email protected]](https://code.google.com/u/117892482479228821242/) on March 20, 2012 10:12:03_
In version r502-beta6 build 9479, MediaInfo enters an infinite loop on the file magnet:?xt=urn:tree:tiger:YLAV4JILJOWHXKIZSOOXBRPLIYP64LE5O7BUPMQ&xl=254684390&dn=Probki(divx).avi
p.s.: the program does not close; it has to be killed via the task manager.
p.p.s.: too lazy to fix it myself :)
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=720_ | priority | mediainfo hangs on some files from on march in version build mediainfo enters an infinite loop on the file magnet xt urn tree tiger xl dn probki divx avi p s the program does not close it has to be killed via the task manager p p s too lazy to fix it myself original issue | 1 |
603,465 | 18,667,391,793 | IssuesEvent | 2021-10-30 03:32:41 | AY2122S1-CS2113T-T12-1/tp | https://api.github.com/repos/AY2122S1-CS2113T-T12-1/tp | closed | Add assertions helper class for model | type.Task priority.Medium | - Can create assertions for expiry to put inside forex and crypto constructors
- Can create assertions for instrumentManager index within bounds | 1.0 | Add assertions helper class for model - - Can create assertions for expiry to put inside forex and crypto constructors
- Can create assertions for instrumentManager index within bounds | priority | add assertions helper class for model can create assertions for expiry to put inside forex and crypto constructors can create assertions for instrumentmanager index within bounds | 1 |
484,883 | 13,958,605,890 | IssuesEvent | 2020-10-24 12:47:26 | AY2021S1-CS2103-T16-3/tp | https://api.github.com/repos/AY2021S1-CS2103-T16-3/tp | closed | As an OHS admin I can archive the current Semester's data | priority.Medium type.Story | ... so that I can keep the data for auditing purposes, but not have it distract me while dealing with the current semester. | 1.0 | As an OHS admin I can archive the current Semester's data - ... so that I can keep the data for auditing purposes, but not have it distract me while dealing with the current semester. | priority | as an ohs admin i can archive the current semester s data so that i can keep the data for auditing purposes but not have it distract me while dealing with the current semester | 1 |
737,738 | 25,529,358,762 | IssuesEvent | 2022-11-29 06:53:33 | rstudio/gt | https://api.github.com/repos/rstudio/gt | closed | Setting Custom Themes for Tables | Difficulty: [2] Intermediate Effort: [3] High Priority: [2] Medium Type: ★ Enhancement | Is it be possible to set a custom `gt` theme like you can do using `ggplot2::theme`? The thought would be to set up consistently used parameters. For me that would be something to the effect of:
```
df %>%
gt() %>%
fmt_currency(
columns = (matches("^salary|^revenue")),
decimals = 0
) %>%
fmt_percent(
columns = starts_with("pct"),
decimals = 0)
```
I also saw this issue, and I'm not sure if the intent was to include this type of request as a part of it:
https://github.com/rstudio/gt/issues/238
| 1.0 | Setting Custom Themes for Tables - Is it be possible to set a custom `gt` theme like you can do using `ggplot2::theme`? The thought would be to set up consistently used parameters. For me that would be something to the effect of:
```
df %>%
gt() %>%
fmt_currency(
columns = (matches("^salary|^revenue")),
decimals = 0
) %>%
fmt_percent(
columns = starts_with("pct"),
decimals = 0)
```
I also saw this issue, and I'm not sure if the intent was to include this type of request as a part of it:
https://github.com/rstudio/gt/issues/238
| priority | setting custom themes for tables is it be possible to set a custom gt theme like you can do using theme the thought would be to set up consistently used parameters for me that would be something to the effect of df gt fmt currency columns matches salary revenue decimals fmt percent columns starts with pct decimals i also saw this issue and i m not sure if the intent was to include this type of request as a part of it | 1 |
216,722 | 7,311,193,020 | IssuesEvent | 2018-02-28 17:01:19 | craftercms/craftercms | https://api.github.com/repos/craftercms/craftercms | closed | [craftercms] Migrate.sh copies configured-lists to the wrong location | bug priority: medium | migrate.sh is copying configured-lists to form-control-config/configured-lists/configured-lists. | 1.0 | [craftercms] Migrate.sh copies configured-lists to the wrong location - migrate.sh is copying configured-lists to form-control-config/configured-lists/configured-lists. | priority | migrate sh copies configured lists to the wrong location migrate sh is copying configured lists to form control config configured lists configured lists | 1 |
623,187 | 19,662,832,717 | IssuesEvent | 2022-01-10 18:52:55 | nanaynaomi/multnomah-falls-research | https://api.github.com/repos/nanaynaomi/multnomah-falls-research | opened | Create new logo - check with Ingrid | medium priority low difficulty content | 1/10/2022
Create some various logo designs to replace the other logo and/or change the coloring of the old logo. Confirm with Ingrid before starting this. Once logo designs are created, show them to Ingrid to hear her thoughts on them and how we could change them to work with the site. | 1.0 | Create new logo - check with Ingrid - 1/10/2022
Create some various logo designs to replace the other logo and/or change the coloring of the old logo. Confirm with Ingrid before starting this. Once logo designs are created, show them to Ingrid to hear her thoughts on them and how we could change them to work with the site. | priority | create new logo check with ingrid create some various logo designs to replace the other logo and or change the coloring of the old logo confirm with ingrid before starting this once logo designs are created show them to ingrid to hear her thoughts on them and how we could change them to work with the site | 1 |
204,644 | 7,089,571,581 | IssuesEvent | 2018-01-12 03:36:52 | dmwm/WMCore | https://api.github.com/repos/dmwm/WMCore | closed | Adapt WMCore OS/Arch requirements to Glidein proposed version | Medium Priority ToDo | See Stefano's talk for exact values. https://indico.cern.ch/event/293016/session/5/contribution/49/material/slides/0.pdf
Policy twiki to come, should be linked here when it does.
I inadvertently filed this in CRABServer and @belforte comments from there are reproduced below.
| 1.0 | Adapt WMCore OS/Arch requirements to Glidein proposed version - See Stefano's talk for exact values. https://indico.cern.ch/event/293016/session/5/contribution/49/material/slides/0.pdf
Policy twiki to come, should be linked here when it does.
I inadvertently filed this in CRABServer and @belforte comments from there are reproduced below.
| priority | adapt wmcore os arch requirements to glidein proposed version see stefano s talk for exact values policy twiki to come should be linked here when it does i inadvertently filed this in crabserver and belforte comments from there are reproduced below | 1 |
155,775 | 5,960,773,221 | IssuesEvent | 2017-05-29 15:01:30 | mkdo/kapow-grunt | https://api.github.com/repos/mkdo/kapow-grunt | opened | New Task: grunt-sass-globbing | Priority: Medium Status: Pending Type: Enhancement | It would be advantageous to introduce globbing so that certain folders in Kapow! Sass would automatically pick up when a new partial was introduced. | 1.0 | New Task: grunt-sass-globbing - It would be advantageous to introduce globbing so that certain folders in Kapow! Sass would automatically pick up when a new partial was introduced. | priority | new task grunt sass globbing it would be advantageous to introduce globbing so that certain folders in kapow sass would automatically pick up when a new partial was introduced | 1 |
698,809 | 23,992,193,561 | IssuesEvent | 2022-09-14 02:55:12 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | reopened | [YSQL] Fix pg_stat_monitor tests | kind/bug area/ysql priority/medium | Jira Link: [DB-770](https://yugabyte.atlassian.net/browse/DB-770)
### Description
The following tests fail when upstream pg_stat_monitor version [0.9.0](https://github.com/percona/pg_stat_monitor/releases/tag/REL0_9_0_STABLE) is integrated with YB.
- [ ] yb_pg_error.sql
`ELECET * FROM unknown; -- syntax error`
When executed the following query `select * from pg_stat_monitor`, we see the following output
```
SELECT query, elevel, sqlcode, message FROM pg_stat_monitor ORDER BY query COLLATE "C",elevel;
query | elevel | sqlcode | message
------------------------------------------------------------------------------------------------+--------+---------+-----------------------------------
insufficient disk/shared space | 20 | 42601 | syntax error at or near "ELECET"
(1 row)
```
Instead of` ELECET * FROM unknown;`, in the query colunm `insufficient disk/shared space` is printed.
This is inconsistent and has been observed in the mac platform.
- [ ] yb_pg_histogram.sql
```
INSERT INTO t1 VALUES(generate_series(1,1000000));
ANALYZE t1;
SELECT count(*) FROM t1;
INSERT INTO t1 VALUES(generate_series(1,1000000));
ANALYZE t1;
SELECT count(*) FROM t1;
```
The following inserts works well. However the number of calls that are displayed for the query,
`SELECT query, calls FROM pg_stat_monitor ORDER BY query COLLATE "C";`
varies every time. Sometimes, there are two entries for the same query for which the sum of
them is equal to the total number of times SELECT has been executed. Hence we comment
these queries.
- [ ] yb_pg_state.sql
`SELECT 1;`
pg_stat_monitor_reset does not remove this from shared memory and hence this might appear
in subsequent pg_stat_monitor tables. Hence we comment this query. We need to verify the
functionality of pg_stat_monitor_reset.
- [ ] yb_pg_tags.sql
` SELECT 1 AS num /* { "application", psql_app, "real_ip", 192.168.1.3) */; `
Comments are not parsed in some platforms, mainly alma8.
- [ ] yb_pg_top_query.sql
Top query has run to run variability in YB.
Sometimes, top queries are not displayed. This happens frequently in alma8 platforms.
- [ ] yb_pg_user.sql
pg_stat_monitor parsing logic has run to run variability for parsing users. | 1.0 | [YSQL] Fix pg_stat_monitor tests - Jira Link: [DB-770](https://yugabyte.atlassian.net/browse/DB-770)
### Description
The following tests fail when upstream pg_stat_monitor version [0.9.0](https://github.com/percona/pg_stat_monitor/releases/tag/REL0_9_0_STABLE) is integrated with YB.
- [ ] yb_pg_error.sql
`ELECET * FROM unknown; -- syntax error`
When executed the following query `select * from pg_stat_monitor`, we see the following output
```
SELECT query, elevel, sqlcode, message FROM pg_stat_monitor ORDER BY query COLLATE "C",elevel;
query | elevel | sqlcode | message
------------------------------------------------------------------------------------------------+--------+---------+-----------------------------------
insufficient disk/shared space | 20 | 42601 | syntax error at or near "ELECET"
(1 row)
```
Instead of` ELECET * FROM unknown;`, in the query colunm `insufficient disk/shared space` is printed.
This is inconsistent and has been observed in the mac platform.
- [ ] yb_pg_histogram.sql
```
INSERT INTO t1 VALUES(generate_series(1,1000000));
ANALYZE t1;
SELECT count(*) FROM t1;
INSERT INTO t1 VALUES(generate_series(1,1000000));
ANALYZE t1;
SELECT count(*) FROM t1;
```
The following inserts works well. However the number of calls that are displayed for the query,
`SELECT query, calls FROM pg_stat_monitor ORDER BY query COLLATE "C";`
varies every time. Sometimes, there are two entries for the same query for which the sum of
them is equal to the total number of times SELECT has been executed. Hence we comment
these queries.
- [ ] yb_pg_state.sql
`SELECT 1;`
pg_stat_monitor_reset does not remove this from shared memory and hence this might appear
in subsequent pg_stat_monitor tables. Hence we comment this query. We need to verify the
functionality of pg_stat_monitor_reset.
- [ ] yb_pg_tags.sql
` SELECT 1 AS num /* { "application", psql_app, "real_ip", 192.168.1.3) */; `
Comments are not parsed in some platforms, mainly alma8.
- [ ] yb_pg_top_query.sql
Top query has run to run variability in YB.
Sometimes, top queries are not displayed. This happens frequently in alma8 platforms.
- [ ] yb_pg_user.sql
pg_stat_monitor parsing logic has run to run variability for parsing users. | priority | fix pg stat monitor tests jira link description the following tests fail when upstream pg stat monitor version is integrated with yb yb pg error sql elecet from unknown syntax error when executed the following query select from pg stat monitor we see the following output select query elevel sqlcode message from pg stat monitor order by query collate c elevel query elevel sqlcode message insufficient disk shared space syntax error at or near elecet row instead of elecet from unknown in the query colunm insufficient disk shared space is printed this is inconsistent and has been observed in the mac platform yb pg histogram sql insert into values generate series analyze select count from insert into values generate series analyze select count from the following inserts works well however the number of calls that are displayed for the query select query calls from pg stat monitor order by query collate c varies every time sometimes there are two entries for the same query for which the sum of them is equal to the total number of times select has been executed hence we comment these queries yb pg state sql select pg stat monitor reset does not remove this from shared memory and hence this might appear in subsequent pg stat monitor tables hence we comment this query we need to verify the functionality of pg stat monitor reset yb pg tags sql select as num application psql app real ip comments are not parsed in some platforms mainly yb pg top query sql top query has run to run variability in yb sometimes top queries are not displayed this happens frequently in platforms yb pg user sql pg stat monitor parsing logic has run to run variability for parsing users | 1 |
620,310 | 19,558,800,692 | IssuesEvent | 2022-01-03 13:32:22 | bounswe/2021SpringGroup1 | https://api.github.com/repos/bounswe/2021SpringGroup1 | closed | Planning on implementing privacy measures | Type: Research Priority: Medium Status: In Progress Platform: Backend | There exists a privacy setting for the current community models. However, we need to change our backend API to implement robust privacy measures. There is a built-in [Permission](https://www.django-rest-framework.org/api-guide/permissions/) module in REST framework that could be useful. We also need to discuss and flesh out requirements based on privacy before implementation. | 1.0 | Planning on implementing privacy measures - There exists a privacy setting for the current community models. However, we need to change our backend API to implement robust privacy measures. There is a built-in [Permission](https://www.django-rest-framework.org/api-guide/permissions/) module in REST framework that could be useful. We also need to discuss and flesh out requirements based on privacy before implementation. | priority | planning on implementing privacy measures there exists a privacy setting for the current community models however we need to change our backend api to implement robust privacy measures there is a built in module in rest framework that could be useful we also need to discuss and flesh out requirements based on privacy before implementation | 1 |
509,016 | 14,710,734,339 | IssuesEvent | 2021-01-05 05:57:30 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | opened | Linking Steam to SLG doesn't work for a user | Category: Accounts Priority: Medium | `Doesnt link slg and steam account. Writes You do not own Eco, while the game was bought on Steam several years ago. I would like to link and change the icon. User ID: slg187377 Steam ID: STEAM_0:0:29674335 (SteamID64: 76561198019614398)` | 1.0 | Linking Steam to SLG doesn't work for a user - `Doesnt link slg and steam account. Writes You do not own Eco, while the game was bought on Steam several years ago. I would like to link and change the icon. User ID: slg187377 Steam ID: STEAM_0:0:29674335 (SteamID64: 76561198019614398)` | priority | linking steam to slg doesn t work for a user doesnt link slg and steam account writes you do not own eco while the game was bought on steam several years ago i would like to link and change the icon user id steam id steam | 1 |
763,212 | 26,747,519,973 | IssuesEvent | 2023-01-30 16:59:54 | MarketSquare/robotframework-browser | https://api.github.com/repos/MarketSquare/robotframework-browser | closed | [Bug] Get code completion for LibraryComponent.library does not work correctly | bug priority: medium | this is just a notice from RoboCon
If I rename __init__.pyi to browser.pyi it works for VSCode. | 1.0 | [Bug] Get code completion for LibraryComponent.library does not work correctly - this is just a notice from RoboCon
If I rename __init__.pyi to browser.pyi it works for VSCode. | priority | get code completion for librarycomponent library does not work correctly this is just a notice from robocon if i rename init pyi to browser pyi it works for vscode | 1 |
729,326 | 25,120,492,874 | IssuesEvent | 2022-11-09 07:40:37 | stratosphererl/stratosphere | https://api.github.com/repos/stratosphererl/stratosphere | opened | As a website user, I want to have a neatly organized menu, so that I can easily navigate the website. | type: user story priority: medium work: complicated [2] work: obvious [1] area: frontend | # Acceptance Criteria
- [ ] Given the website has a navigation bar
- [ ] When the user selects an item on the nav bar
- [ ] Then the user is redirected elsewhere
# Estimation of Work
- Implementation: 3-4 hrs (1-2 days)
- Design: 2 hrs (1 day)
_Note: Implementation and Design may or may not happen simultaneously_
# Tasks
- [ ] What needs to get done?
# Notes
The purpose of this issue is to begin a common theme or design when further developing the website.
This is not a very strict issue, it is meant to be rather open-ended and possibly changed to be more specific later. | 1.0 | As a website user, I want to have a neatly organized menu, so that I can easily navigate the website. - # Acceptance Criteria
- [ ] Given the website has a navigation bar
- [ ] When the user selects an item on the nav bar
- [ ] Then the user is redirected elsewhere
# Estimation of Work
- Implementation: 3-4 hrs (1-2 days)
- Design: 2 hrs (1 day)
_Note: Implementation and Design may or may not happen simultaneously_
# Tasks
- [ ] What needs to get done?
# Notes
The purpose of this issue is to begin a common theme or design when further developing the website.
This is not a very strict issue, it is meant to be rather open-ended and possibly changed to be more specific later. | priority | as a website user i want to have a neatly organized menu so that i can easily navigate the website acceptance criteria given the website has a navigation bar when the user selects an item on the nav bar then the user is redirected elsewhere estimation of work implementation hrs days design hrs day note implementation and design may or may not happen simultaneously tasks what needs to get done notes the purpose of this issue is to begin a common theme or design when further developing the website this is not a very strict issue it is meant to be rather open ended and possibly changed to be more specific later | 1 |
805,177 | 29,510,312,236 | IssuesEvent | 2023-06-03 21:07:20 | CAMaji/oxygen-cs-grp2-eq10 | https://api.github.com/repos/CAMaji/oxygen-cs-grp2-eq10 | opened | [FEATURE] - Git Hook - Linting | feature medium priority | ## Feature description
Define the linting steps.
## Priority level (critical, important, or useful)
Important level
## Requirements and constraints (optional)
## Technical details (optional)
| 1.0 | [FEATURE] - Git Hook - Linting - ## Feature description
Define the linting steps.
## Priority level (critical, important, or useful)
Important level
## Requirements and constraints (optional)
## Technical details (optional)
| priority | git hook linting feature description define the linting steps priority level critical important or useful important level requirements and constraints optional technical details optional | 1 |
121,263 | 4,807,140,534 | IssuesEvent | 2016-11-02 20:34:46 | PovertyAction/high-frequency-checks | https://api.github.com/repos/PovertyAction/high-frequency-checks | opened | Program: import labels from survey form and assign them to variables | enhancement Medium Priority New Program Ideas | Create a program that assigns a number of characteristics to each variable, including its value label.
This is helpful for saving value labels to select_multiple questions so we can later reference them and output them for the templates.
This is done in odkmeta but would be easy and helpful in a program so users of the SCTO import do file could make use of it. | 1.0 | Program: import labels from survey form and assign them to variables - Create a program that assigns a number of characteristics to each variable, including its value label.
This is helpful for saving value labels to select_multiple questions so we can later reference them and output them for the templates.
This is done in odkmeta but would be easy and helpful in a program so users of the SCTO import do file could make use of it. | priority | program import labels from survey form and assign them to variables create a program that assigns a number of characteristics to each variable including its value label this is helpful for saving value labels to select multiple questions so we can later reference them and output them for the templates this is done in odkmeta but would be easy and helpful in a program so users of the scto import do file could make use of it | 1 |
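The request above is for a Stata program, but the core idea, reading the survey form's choice lists and keeping a value-to-label mapping per list so the labels can later be referenced and output, is easy to sketch. A hypothetical Python version over XLSForm-style `choices` rows (the row layout is assumed, not taken from odkmeta):

```python
def build_value_labels(choices):
    """Group an ODK-style choices sheet into {list_name: {value: label}}.

    `choices` is an iterable of (list_name, value, label) rows, as found
    on the `choices` sheet of an XLSForm.
    """
    labels = {}
    for list_name, value, label in choices:
        labels.setdefault(list_name, {})[value] = label
    return labels

def label_for(labels, list_name, value):
    """Look up the label recorded for one answer value; None when unknown."""
    return labels.get(list_name, {}).get(value)
```

With the mapping in hand, a select_multiple answer can be rendered by looking up each selected value's label.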
520,711 | 15,091,588,148 | IssuesEvent | 2021-02-06 16:06:06 | space-wizards/space-station-14 | https://api.github.com/repos/space-wizards/space-station-14 | opened | Puddles slip regardless of reagents contained. | Difficulty: 3 - Intermediate Priority: 2-medium Size: 3 - Medium | Spill sugar on the floor? Whoops, get slipped.
Ideally, we should be able to define puddle behavior on reagents themselves.
This is most likely gonna require YAML `!type` knowledge, knowing how the chemistry/puddles system works, etc. | 1.0 | Puddles slip regardless of reagents contained. - Spill sugar on the floor? Whoops, get slipped.
Ideally, we should be able to define puddle behavior on reagents themselves.
This is most likely gonna require YAML `!type` knowledge, knowing how the chemistry/puddles system works, etc. | priority | puddles slip regardless of reagents contained spill sugar on the floor whoops get slipped ideally we should be able to define puddle behavior on reagents themselves this is most likely gonna require yaml type knowledge knowing how the chemistry puddles system works etc | 1 |
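The change the issue asks for, moving slip behavior onto the reagent definitions themselves, would live in SS14's YAML prototypes, but the lookup it implies is simple. All reagent names and flags below are invented for illustration:

```python
# Hypothetical reagent prototypes; in SS14 these would be YAML entries.
REAGENTS = {
    "Water": {"slippery": True},
    "Sugar": {"slippery": False},
    "SpaceLube": {"slippery": True},
}

def puddle_is_slippery(contents):
    """A puddle slips only when at least one contained reagent is slippery."""
    return any(REAGENTS.get(name, {}).get("slippery", False) for name in contents)
```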
213,933 | 7,261,595,281 | IssuesEvent | 2018-02-18 22:16:26 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | opened | python bindings for symbolic Monomial, MonomialBasis, and Polynomial | priority: medium team: software core type: feature request | I took a slapdash pass at this trying to enable one particular optimization problem in python:
https://github.com/RussTedrake/drake/commit/1cef69a8df30324e2b2b9f2faa9847ce7e0ac06e
but it really needs to be done correctly and more carefully. There are not that many operators to bind, really, but doing it properly matters. @soonho-tri -- might you be willing? | 1.0 | python bindings for symbolic Monomial, MonomialBasis, and Polynomial - I took a slapdash pass at this trying to enable one particular optimization problem in python:
https://github.com/RussTedrake/drake/commit/1cef69a8df30324e2b2b9f2faa9847ce7e0ac06e
but it really needs to be done correctly and more carefully. There are not that many operators to bind, really, but doing it properly matters. @soonho-tri -- might you be willing? | priority | python bindings for symbolic monomial monomialbasis and polynomial i took a slapdash pass at this trying to enable one particular optimization problem in python but it really needs to be done correctly and more carefully there are not that many operators to bind really but doing it properly matters soonho tri might you be willing | 1 |
550,705 | 16,130,678,684 | IssuesEvent | 2021-04-29 03:55:03 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | ehl_crb: tests/kernel/context tests failed | bug priority: medium | **Describe the bug**
kernel_cpu_idle_atomic and kernel_cpu_idle tests failed
**To Reproduce**
Steps to reproduce the behavior:
1. twister -W -p ehl_crb --device-testing --device-serial /dev/ttyUSB0 -T tests/kernel/context/ --west flash="/home/user/EHL_X86_PXE.sh"
2. Test results
**Expected behavior**
Tests should pass
**Logs and console output**
```
*** Booting Zephyr OS build zephyr-v2.5.99-ww13.4 ***
Running test suite context
===================================================================
START - test_kernel_interrupts
PASS - test_kernel_interrupts in 0.1 seconds
===================================================================
START - test_kernel_ctx_thread
Testing k_current_get() from an ISR and thread
Testing k_is_in_isr() from an ISR
Testing k_is_in_isr() from a preemptible thread
PASS - test_kernel_ctx_thread in 0.13 seconds
===================================================================
START - test_busy_wait
Thread busy waiting for 20000 usecs
Thread busy waiting completed
Thread busy waiting for 20000 usecs (irqs locked)
Thread busy waiting completed (irqs locked)
PASS - test_busy_wait in 0.56 seconds
===================================================================
START - test_k_sleep
thread sleeping for 50 milliseconds
thread back from sleep
Testing k_thread_create() without cancellation
thread (q order: 2, t/o: 500) is running
got thread (q order: 2, t/o: 500) as expected
thread (q order: 3, t/o: 750) is running
got thread (q order: 3, t/o: 750) as expected
thread (q order: 0, t/o: 1000) is running
got thread (q order: 0, t/o: 1000) as expected
thread (q order: 6, t/o: 1250) is running
got thread (q order: 6, t/o: 1250) as expected
thread (q order: 1, t/o: 1500) is running
got thread (q order: 1, t/o: 1500) as expected
thread (q order: 4, t/o: 1750) is running
got thread (q order: 4, t/o: 1750) as expected
thread (q order: 5, t/o: 2000) is running
got thread (q order: 5, t/o: 2000) as expected
Testing k_thread_create() with cancellations
cancelling [q order: 0, t/o: 1000, t/o order: 0]
thread (q order: 3, t/o: 750) is running
got (q order: 3, t/o: 750, t/o order 1) as expected
thread (q order: 0, t/o: 1000) is running
got (q order: 0, t/o: 1000, t/o order 2) as expected
cancelling [q order: 3, t/o: 750, t/o order: 3]
cancelling [q order: 4, t/o: 1750, t/o order: 4]
thread (q order: 4, t/o: 1750) is running
got (q order: 4, t/o: 1750, t/o order 5) as expected
cancelling [q order: 6, t/o: 1250, t/o order: 6]
PASS - test_k_sleep in 5.340 seconds
===================================================================
START - test_kernel_cpu_idle_atomic
Assertion failed at WEST_TOPDIR/zephyr/tests/kernel/context/src/main.c:350: _test_kernel_cpu_idle: (tms2 < tms is true)
Bad ms value computed,got 5482 which is less than 5483
FAIL - test_kernel_cpu_idle_atomic in 0.20 seconds
===================================================================
START - test_kernel_cpu_idle
Assertion failed at WEST_TOPDIR/zephyr/tests/kernel/context/src/main.c:350: _test_kernel_cpu_idle: (tms2 < tms is true)
Bad ms value computed,got 5516 which is less than 5517
FAIL - test_kernel_cpu_idle in 0.20 seconds
===================================================================
START - test_k_yield
PASS - test_k_yield in 0.1 seconds
===================================================================
START - test_kernel_thread
PASS - test_kernel_thread in 0.1 seconds
===================================================================
START - test_kernel_timer_interrupts
PASS - test_kernel_timer_interrupts in 0.4 seconds
===================================================================
Test suite context failed.
===================================================================
PROJECT EXECUTION FAILED
```
**Environment (please complete the following information):**
- OS: Fedora33
- Toolchain: zephyr-sdk-0.12.3
- Commit ID: ecac194448da
| 1.0 | ehl_crb: tests/kernel/context tests failed - **Describe the bug**
kernel_cpu_idle_atomic and kernel_cpu_idle tests failed
**To Reproduce**
Steps to reproduce the behavior:
1. twister -W -p ehl_crb --device-testing --device-serial /dev/ttyUSB0 -T tests/kernel/context/ --west flash="/home/user/EHL_X86_PXE.sh"
2. Test results
**Expected behavior**
Tests should pass
**Logs and console output**
```
*** Booting Zephyr OS build zephyr-v2.5.99-ww13.4 ***
Running test suite context
===================================================================
START - test_kernel_interrupts
PASS - test_kernel_interrupts in 0.1 seconds
===================================================================
START - test_kernel_ctx_thread
Testing k_current_get() from an ISR and thread
Testing k_is_in_isr() from an ISR
Testing k_is_in_isr() from a preemptible thread
PASS - test_kernel_ctx_thread in 0.13 seconds
===================================================================
START - test_busy_wait
Thread busy waiting for 20000 usecs
Thread busy waiting completed
Thread busy waiting for 20000 usecs (irqs locked)
Thread busy waiting completed (irqs locked)
PASS - test_busy_wait in 0.56 seconds
===================================================================
START - test_k_sleep
thread sleeping for 50 milliseconds
thread back from sleep
Testing k_thread_create() without cancellation
thread (q order: 2, t/o: 500) is running
got thread (q order: 2, t/o: 500) as expected
thread (q order: 3, t/o: 750) is running
got thread (q order: 3, t/o: 750) as expected
thread (q order: 0, t/o: 1000) is running
got thread (q order: 0, t/o: 1000) as expected
thread (q order: 6, t/o: 1250) is running
got thread (q order: 6, t/o: 1250) as expected
thread (q order: 1, t/o: 1500) is running
got thread (q order: 1, t/o: 1500) as expected
thread (q order: 4, t/o: 1750) is running
got thread (q order: 4, t/o: 1750) as expected
thread (q order: 5, t/o: 2000) is running
got thread (q order: 5, t/o: 2000) as expected
Testing k_thread_create() with cancellations
cancelling [q order: 0, t/o: 1000, t/o order: 0]
thread (q order: 3, t/o: 750) is running
got (q order: 3, t/o: 750, t/o order 1) as expected
thread (q order: 0, t/o: 1000) is running
got (q order: 0, t/o: 1000, t/o order 2) as expected
cancelling [q order: 3, t/o: 750, t/o order: 3]
cancelling [q order: 4, t/o: 1750, t/o order: 4]
thread (q order: 4, t/o: 1750) is running
got (q order: 4, t/o: 1750, t/o order 5) as expected
cancelling [q order: 6, t/o: 1250, t/o order: 6]
PASS - test_k_sleep in 5.340 seconds
===================================================================
START - test_kernel_cpu_idle_atomic
Assertion failed at WEST_TOPDIR/zephyr/tests/kernel/context/src/main.c:350: _test_kernel_cpu_idle: (tms2 < tms is true)
Bad ms value computed,got 5482 which is less than 5483
FAIL - test_kernel_cpu_idle_atomic in 0.20 seconds
===================================================================
START - test_kernel_cpu_idle
Assertion failed at WEST_TOPDIR/zephyr/tests/kernel/context/src/main.c:350: _test_kernel_cpu_idle: (tms2 < tms is true)
Bad ms value computed,got 5516 which is less than 5517
FAIL - test_kernel_cpu_idle in 0.20 seconds
===================================================================
START - test_k_yield
PASS - test_k_yield in 0.1 seconds
===================================================================
START - test_kernel_thread
PASS - test_kernel_thread in 0.1 seconds
===================================================================
START - test_kernel_timer_interrupts
PASS - test_kernel_timer_interrupts in 0.4 seconds
===================================================================
Test suite context failed.
===================================================================
PROJECT EXECUTION FAILED
```
**Environment (please complete the following information):**
- OS: Fedora33
- Toolchain: zephyr-sdk-0.12.3
- Commit ID: ecac194448da
| priority | ehl crb tests kernel context tests failed describe the bug kernel cpu idle atomic and kernel cpu idle tests failed to reproduce steps to reproduce the behavior twister w p ehl crb device testing device serial dev t tests kernel context west flash home user ehl pxe sh test results expected behavior tests should pass logs and console output booting zephyr os build zephyr running test suite context start test kernel interrupts pass test kernel interrupts in seconds start test kernel ctx thread testing k current get from an isr and thread testing k is in isr from an isr testing k is in isr from a preemptible thread pass test kernel ctx thread in seconds start test busy wait thread busy waiting for usecs thread busy waiting completed thread busy waiting for usecs irqs locked thread busy waiting completed irqs locked pass test busy wait in seconds start test k sleep thread sleeping for milliseconds thread back from sleep testing k thread create without cancellation thread q order t o is running got thread q order t o as expected thread q order t o is running got thread q order t o as expected thread q order t o is running got thread q order t o as expected thread q order t o is running got thread q order t o as expected thread q order t o is running got thread q order t o as expected thread q order t o is running got thread q order t o as expected thread q order t o is running got thread q order t o as expected testing k thread create with cancellations cancelling thread q order t o is running got q order t o t o order as expected thread q order t o is running got q order t o t o order as expected cancelling cancelling thread q order t o is running got q order t o t o order as expected cancelling pass test k sleep in seconds start test kernel cpu idle atomic assertion failed at west topdir zephyr tests kernel context src main c test kernel cpu idle tms is true bad ms value computed got which is less than fail test kernel cpu idle atomic in seconds start test 
kernel cpu idle assertion failed at west topdir zephyr tests kernel context src main c test kernel cpu idle tms is true bad ms value computed got which is less than fail test kernel cpu idle in seconds start test k yield pass test k yield in seconds start test kernel thread pass test kernel thread in seconds start test kernel timer interrupts pass test kernel timer interrupts in seconds test suite context failed project execution failed environment please complete the following information os toolchain zephyr sdk commit id | 1 |
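Both failures in the log above report the same broken invariant: the millisecond timestamp sampled after idling (`tms2`) came out smaller than the one sampled before (`tms`). Stripped of kernel specifics, the shape of the check is a monotonicity assertion over a tick-to-milliseconds conversion; a sketch with Python's monotonic clock standing in for Zephyr's tick counter:

```python
import time

def sample_ms_pair(idle):
    """Run `idle`, sampling a monotonic millisecond timestamp before and after.

    Returns (tms, tms2); the Zephyr test's invariant is tms2 >= tms.
    """
    tms = time.monotonic_ns() // 1_000_000   # floor conversion to ms
    idle()
    tms2 = time.monotonic_ns() // 1_000_000
    return tms, tms2
```

With a well-behaved clock the floor conversion can never make a later sample report an earlier millisecond, which is why the kernel-side failure points at the timekeeping path rather than at the test.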
758,686 | 26,564,886,447 | IssuesEvent | 2023-01-20 19:10:26 | tm21cy/Bit | https://api.github.com/repos/tm21cy/Bit | closed | Suggestion: implement SQLite buffer file for query and rate optimizations | enhancement good first issue medium priority semver: minor | **Is your feature request related to a problem? Please describe.**
Planetscale limits our monthly queries, and any networking takes time. As well as this, we then rely on a network based backup.
**Describe the solution you'd like**
Approve and add a buffer file in SQLite for caching data - have two Sequelize entities, one configured for MySQL (raw queries and updates) and one for SQLite (take returned info and backup).
**Describe alternatives you've considered**
None suitable.
**Additional context**
None.
| 1.0 | Suggestion: implement SQLite buffer file for query and rate optimizations - **Is your feature request related to a problem? Please describe.**
Planetscale limits our monthly queries, and any networking takes time. As well as this, we then rely on a network based backup.
**Describe the solution you'd like**
Approve and add a buffer file in SQLite for caching data - have two Sequelize entities, one configured for MySQL (raw queries and updates) and one for SQLite (take returned info and backup).
**Describe alternatives you've considered**
None suitable.
**Additional context**
None.
| priority | suggestion implement sqlite buffer file for query and rate optimizations is your feature request related to a problem please describe planetscale limits our monthly queries and any networking takes time as well as this we then rely on a network based backup describe the solution you d like approve and add a buffer file in sqlite for caching data have two sequelize entities one configured for mysql raw queries and updates and one for sqlite take returned info and backup describe alternatives you ve considered none suitable additional context none | 1 |
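The proposal above pairs the remote MySQL store with a local SQLite buffer: writes go to the remote and are mirrored locally, and reads are served from the buffer when possible to save networked queries. The project would implement this with two Sequelize instances; the following is only a stdlib Python sketch of the same write-through idea, with a dict standing in for the remote side:

```python
import sqlite3

class BufferedStore:
    """Write-through cache: every write goes to the remote store and is
    mirrored into a local SQLite buffer; reads try the buffer first."""

    def __init__(self, remote):
        self.remote = remote          # stand-in for the MySQL/Planetscale side
        self.local = sqlite3.connect(":memory:")
        self.local.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

    def put(self, key, value):
        self.remote[key] = value      # authoritative write
        self.local.execute(
            "INSERT INTO kv (k, v) VALUES (?, ?) "
            "ON CONFLICT(k) DO UPDATE SET v = excluded.v", (key, value))

    def get(self, key):
        row = self.local.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        if row is not None:
            return row[0]             # served from the buffer, no network hop
        value = self.remote.get(key)  # cache miss: fall back to remote
        if value is not None:
            self.local.execute("INSERT INTO kv (k, v) VALUES (?, ?)", (key, value))
        return value
```

Using a file path instead of `:memory:` gives the on-disk buffer file the issue describes, which doubles as a local backup.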
75,375 | 3,461,844,664 | IssuesEvent | 2015-12-20 12:45:28 | PowerPointLabs/powerpointlabs | https://api.github.com/repos/PowerPointLabs/powerpointlabs | closed | Text highlight is behind the text | Feature.TextHighlight Priority.Medium status.releaseCandidate type-enhancement | Highlight test does not work if the text box has a fill color. Perhaps in such cases the highlight should be in front of the text? | 1.0 | Text highlight is behind the text - Highlight test does not work if the text box has a fill color. Perhaps in such cases the highlight should be in front of the text? | priority | text highlight is behind the text highlight test does not work if the text box has a fill color perhaps in such cases the highlight should be in front of the text | 1 |
315,679 | 9,630,874,214 | IssuesEvent | 2019-05-15 13:06:32 | inverse-inc/packetfence | https://api.github.com/repos/inverse-inc/packetfence | opened | POTD: conflict with local source | Priority: Medium Type: Bug | When you want to use a POTD source with a local source (for example on default connection profile without modification), your POTD user will match the local source before POTD causing following message:
![Screenshot from 2019-05-15 14-53-41](https://user-images.githubusercontent.com/10830260/57777070-41054000-7721-11e9-97bc-14fe808fd935.png)
In this context, local source should be able to detect a POTD user (based on `potd: yes` in DB) and fail (in order to have POTD source matching). | 1.0 | POTD: conflict with local source - When you want to use a POTD source with a local source (for example on default connection profile without modification), your POTD user will match the local source before POTD causing following message:
![Screenshot from 2019-05-15 14-53-41](https://user-images.githubusercontent.com/10830260/57777070-41054000-7721-11e9-97bc-14fe808fd935.png)
In this context, local source should be able to detect a POTD user (based on `potd: yes` in DB) and fail (in order to have POTD source matching). | priority | potd conflict with local source when you want to use a potd source with a local source for example on default connection profile without modification your potd user will match the local source before potd causing following message in this context local source should be able to detect a potd user based on potd yes in db and fail in order to have potd source matching | 1 |
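The fix described, that the local source should recognize a POTD user (`potd: yes` in the DB) and decline, so matching falls through to the POTD source, is an ordering and short-circuit rule over authentication sources. A hedged sketch with made-up source functions rather than PacketFence's real API:

```python
def local_source(user):
    """Decline POTD users so the chain can fall through to the POTD source."""
    if user.get("potd") == "yes":
        return None
    return "local" if user.get("password") else None

def potd_source(user):
    return "potd" if user.get("potd") == "yes" else None

def match_source(user, sources=(local_source, potd_source)):
    """Return the first source that claims the user, in configured order."""
    for source in sources:
        result = source(user)
        if result is not None:
            return result
    return None
```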
725,035 | 24,949,564,996 | IssuesEvent | 2022-11-01 05:23:42 | AY2223S1-CS2103T-F11-2/tp | https://api.github.com/repos/AY2223S1-CS2103T-F11-2/tp | closed | [PE-D][Tester E] Checkbox is not clickable | priority.Medium response.NotInScope | The checkbox looks enabled and invites the user to click it. However, clicking it does not do anything.
Your team could consider making the name of the task greyed out and struck through to indicate that it is complete.
![image.png](https://raw.githubusercontent.com/yeojunjie/ped/main/files/8b58be8b-67ca-4a06-94b7-dcc704928e8c.png)
-------------
Labels: `severity.Medium` `type.FeatureFlaw`
original: yeojunjie/ped#3 | 1.0 | [PE-D][Tester E] Checkbox is not clickable - The checkbox looks enabled and invites the user to click it. However, clicking it does not do anything.
Your team could consider making the name of the task greyed out and struck through to indicate that it is complete.
![image.png](https://raw.githubusercontent.com/yeojunjie/ped/main/files/8b58be8b-67ca-4a06-94b7-dcc704928e8c.png)
-------------
Labels: `severity.Medium` `type.FeatureFlaw`
original: yeojunjie/ped#3 | priority | checkbox is not clickable the checkbox looks enabled and invites the user to click it however clicking it does not do anything your team could consider making the name of the task greyed out and struck through to indicate that it is complete labels severity medium type featureflaw original yeojunjie ped | 1 |
605,559 | 18,736,898,231 | IssuesEvent | 2021-11-04 08:53:50 | beattosetto/beattosetto | https://api.github.com/repos/beattosetto/beattosetto | closed | The view still create `BeatmapEntry` despite no beatmap found in API | area:api area:beatmap priority:medium type:bug | To reproduce:
1. Add a beatmap ID that does not exist in osu!
2. A `BeatmapEntry` is created with every field blank except user.
This regressed from the add beatmap view that's still make `BeatmapEntry` despite the result from API is blank. | 1.0 | The view still create `BeatmapEntry` despite no beatmap found in API - To reproduce:
1. Add a beatmap ID that does not exist in osu!
2. A `BeatmapEntry` is created with every field blank except user.
This regressed from the add beatmap view that's still make `BeatmapEntry` despite the result from API is blank. | priority | the view still create beatmapentry despite no beatmap found in api to reproduce add beatmap id that not have in osu the beatmapentry created with every field except user blank this regressed from the add beatmap view that s still make beatmapentry despite the result from api is blank | 1 |
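The regression is a view that persists a `BeatmapEntry` even when the osu! API returns nothing. The guard it implies can be sketched as below; the function and parameter names are hypothetical, not beattosetto's actual code:

```python
def add_beatmap(api_result, create_entry):
    """Create an entry only when the API actually returned beatmap data.

    `api_result` is the decoded API response (None or empty when the ID
    does not exist); `create_entry` persists a row and returns it.
    """
    if not api_result:
        return None          # nothing found: do not persist a blank entry
    return create_entry(api_result)
```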
134,220 | 5,222,288,178 | IssuesEvent | 2017-01-27 07:24:03 | Sonarr/Sonarr | https://api.github.com/repos/Sonarr/Sonarr | closed | Downloading status for episode with an existing file | priority:medium suboptimal ui-only | When you queue a new download for an episode with an existing file and you have an existing file the status doesn't change to show its downloading. Either we need to fully replace the status or show both at the same time.
| 1.0 | Downloading status for episode with an existing file - When you queue a new download for an episode with an existing file and you have an existing file the status doesn't change to show its downloading. Either we need to fully replace the status or show both at the same time.
| priority | downloading status for episode with an existing file when you queue a new download for an episode with an existing file and you have an existing file the status doesn t change to show its downloading either we need to fully replace the status or show both at the same time | 1 |
330,312 | 10,038,444,288 | IssuesEvent | 2019-07-18 15:09:50 | zephyrproject-rtos/zephyr | https://api.github.com/repos/zephyrproject-rtos/zephyr | closed | code relocation generating different memory layout cause user mode not working | area: Linker Scripts area: Memory Protection area: Userspace bug priority: medium | **Describe the bug**
In the magic building process for user mode, there are different memory layout as below log show.
The map file of building zephyr_prebuilt.elf is:
```
.text.sedi_pm_pmu2nvic_isr
0x0000000060011188 0xa libbsp_sedi_arm.a(pm.c.obj)
0x0000000060011188 sedi_pm_pmu2nvic_isr
*(SORT_BY_ALIGNMENT(.gnu.linkonce.t.*))
*(SORT_BY_ALIGNMENT(.glue_7t))
.glue_7t 0x0000000060011192 0x0 linker stubs
*(SORT_BY_ALIGNMENT(.glue_7))
.glue_7 0x0000000060011192 0x0 linker stubs
*(SORT_BY_ALIGNMENT(.vfp11_veneer))
.vfp11_veneer 0x0000000060011192 0x0 linker stubs
*(SORT_BY_ALIGNMENT(.v4_bx))
.v4_bx 0x0000000060011192 0x0 linker stubs
0x0000000060011192 _priv_stacks_text_area_start = .
*(SORT_BY_ALIGNMENT(.priv_stacks.text*))
*fill* 0x0000000060011192 0x2
```
And ztest_thread_stack located in 0x60030000 from below log and we can also find it from map file.
`gen_kobject_list.py: symbol 'ztest_thread_stack' at 0x60030000 contains 1 object(s)`
The map file of building priv_stacks_prebuilt.elf is:
```
.text.sedi_pm_pmu2nvic_isr
0x0000000060011188 0xa libbsp_sedi_arm.a(pm.c.obj)
0x0000000060011188 sedi_pm_pmu2nvic_isr
*fill* 0x0000000060011192 0x6
.text.sedi_pm_pmu2nvic_isr.__stub
0x0000000060011198 0x68 linker stubs
*(SORT_BY_ALIGNMENT(.gnu.linkonce.t.*))
*(SORT_BY_ALIGNMENT(.glue_7t))
.glue_7t 0x0000000060011200 0x0 linker stubs
*(SORT_BY_ALIGNMENT(.glue_7))
.glue_7 0x0000000060011200 0x0 linker stubs
*(SORT_BY_ALIGNMENT(.vfp11_veneer))
.vfp11_veneer 0x0000000060011200 0x0 linker stubs
*(SORT_BY_ALIGNMENT(.v4_bx))
.v4_bx 0x0000000060011200 0x0 linker stubs
0x0000000060011200 _priv_stacks_text_area_start = .
```
And ztest_thread_stack located in 0x60050000 from below log and we can also find it from map file.
`gen_priv_stacks.py: symbol 'ztest_thread_stack' at 0x60050000 contains 1 object(s)`
That difference produces a wrong hash table in priv_stacks_hash.c, because the .rodata, .noinit, and .data sections end up at different offsets within the same SRAM region, so user mode does not work.
| 1.0 | code relocation generating different memory layout cause user mode not working - **Describe the bug**
In the magic building process for user mode, there are different memory layout as below log show.
The map file of building zephyr_prebuilt.elf is:
```
.text.sedi_pm_pmu2nvic_isr
0x0000000060011188 0xa libbsp_sedi_arm.a(pm.c.obj)
0x0000000060011188 sedi_pm_pmu2nvic_isr
*(SORT_BY_ALIGNMENT(.gnu.linkonce.t.*))
*(SORT_BY_ALIGNMENT(.glue_7t))
.glue_7t 0x0000000060011192 0x0 linker stubs
*(SORT_BY_ALIGNMENT(.glue_7))
.glue_7 0x0000000060011192 0x0 linker stubs
*(SORT_BY_ALIGNMENT(.vfp11_veneer))
.vfp11_veneer 0x0000000060011192 0x0 linker stubs
*(SORT_BY_ALIGNMENT(.v4_bx))
.v4_bx 0x0000000060011192 0x0 linker stubs
0x0000000060011192 _priv_stacks_text_area_start = .
*(SORT_BY_ALIGNMENT(.priv_stacks.text*))
*fill* 0x0000000060011192 0x2
```
And ztest_thread_stack located in 0x60030000 from below log and we can also find it from map file.
`gen_kobject_list.py: symbol 'ztest_thread_stack' at 0x60030000 contains 1 object(s)`
The map file of building priv_stacks_prebuilt.elf is:
```
.text.sedi_pm_pmu2nvic_isr
0x0000000060011188 0xa libbsp_sedi_arm.a(pm.c.obj)
0x0000000060011188 sedi_pm_pmu2nvic_isr
*fill* 0x0000000060011192 0x6
.text.sedi_pm_pmu2nvic_isr.__stub
0x0000000060011198 0x68 linker stubs
*(SORT_BY_ALIGNMENT(.gnu.linkonce.t.*))
*(SORT_BY_ALIGNMENT(.glue_7t))
.glue_7t 0x0000000060011200 0x0 linker stubs
*(SORT_BY_ALIGNMENT(.glue_7))
.glue_7 0x0000000060011200 0x0 linker stubs
*(SORT_BY_ALIGNMENT(.vfp11_veneer))
.vfp11_veneer 0x0000000060011200 0x0 linker stubs
*(SORT_BY_ALIGNMENT(.v4_bx))
.v4_bx 0x0000000060011200 0x0 linker stubs
0x0000000060011200 _priv_stacks_text_area_start = .
```
And ztest_thread_stack located in 0x60050000 from below log and we can also find it from map file.
`gen_priv_stacks.py: symbol 'ztest_thread_stack' at 0x60050000 contains 1 object(s)`
That difference produces a wrong hash table in priv_stacks_hash.c, because the .rodata, .noinit, and .data sections end up at different offsets within the same SRAM region, so user mode does not work.
| priority | code relocation generating different memory layout cause user mode not working describe the bug in the magic building process for user mode there are different memory layout as below log show the map file of building zephyr prebuilt elf is text sedi pm isr libbsp sedi arm a pm c obj sedi pm isr sort by alignment gnu linkonce t sort by alignment glue glue linker stubs sort by alignment glue glue linker stubs sort by alignment veneer veneer linker stubs sort by alignment bx bx linker stubs priv stacks text area start sort by alignment priv stacks text fill and ztest thread stack located in from below log and we can also find it from map file gen kobject list py symbol ztest thread stack at contains object s the map file of building priv stacks prebuilt elf is text sedi pm isr libbsp sedi arm a pm c obj sedi pm isr fill text sedi pm isr stub linker stubs sort by alignment gnu linkonce t sort by alignment glue glue linker stubs sort by alignment glue glue linker stubs sort by alignment veneer veneer linker stubs sort by alignment bx bx linker stubs priv stacks text area start and ztest thread stack located in from below log and we can also find it from map file gen priv stacks py symbol ztest thread stack at contains object s so that difference will cause the wrong hash table in priv stacks hash c because of the different rodata noinit data sections in the same one sram region and it will cause the user mode doesn t work | 1 |
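The two build logs above disagree on where `ztest_thread_stack` lands (0x60030000 vs 0x60050000), and that drift between the two link passes is exactly what poisons the generated hash table. A small, hypothetical checker that parses such `gen_*` log lines and flags symbols whose addresses moved:

```python
import re

_SYM = re.compile(r"symbol '(\w+)' at (0x[0-9a-fA-F]+)")

def symbol_addresses(log_text):
    """Extract {symbol: address} pairs from gen_*_list.py style log lines."""
    return {m.group(1): int(m.group(2), 16) for m in _SYM.finditer(log_text)}

def moved_symbols(log_a, log_b):
    """Symbols present in both logs whose addresses differ between passes."""
    a, b = symbol_addresses(log_a), symbol_addresses(log_b)
    return sorted(name for name in a.keys() & b.keys() if a[name] != b[name])
```

Running such a comparison between the zephyr_prebuilt and priv_stacks_prebuilt passes would have surfaced the layout mismatch immediately.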
705,358 | 24,232,336,071 | IssuesEvent | 2022-09-26 19:28:15 | yugabyte/yugabyte-db | https://api.github.com/repos/yugabyte/yugabyte-db | closed | [Docs] Add documentation for pg_stat_progress_copy | kind/enhancement area/ysql priority/medium | Jira Link: [DB-3523](https://yugabyte.atlassian.net/browse/DB-3523)
### Description
We added the [pg_stat_progress_copy](https://www.postgresql.org/docs/current/progress-reporting.html#COPY-PROGRESS-REPORTING) view in YB. It is mostly the same as in PostgreSQL, with the following changes:
- tuples_processed are the tuples which are already persisted in a transaction during the copy command.
- Retains copy command information in the view after the copy has finished. Though for specific session only the issued copy command status is retained.
- Add a status column to indicate the status of the copy command.
| 1.0 | [Docs] Add documentation for pg_stat_progress_copy - Jira Link: [DB-3523](https://yugabyte.atlassian.net/browse/DB-3523)
### Description
We added the [pg_stat_progress_copy](https://www.postgresql.org/docs/current/progress-reporting.html#COPY-PROGRESS-REPORTING) view in YB. It is mostly the same as in PostgreSQL, with the following changes:
- tuples_processed are the tuples which are already persisted in a transaction during the copy command.
- Retains copy command information in the view after the copy has finished. Though for specific session only the issued copy command status is retained.
- Add a status column to indicate the status of the copy command.
| priority | add documentation for pg stat progress copy jira link description we added view in the yb mostly it is same as postgresql with the following changes tuples processed are the tuples which are already persisted in a transaction during the copy command retains copy command information in the view after the copy has finished though for specific session only the issued copy command status is retained add a status column to indicate the status of the copy command | 1 |
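Documentation for the view usually shows it being polled from a second session while a `COPY` runs. The sketch below only formats sample rows the way such a doc example might; the rows are invented, `pid`, `relid`, and `tuples_processed` are standard PostgreSQL columns, and `status` is the YB-specific addition the issue describes:

```python
# A query a doc example might poll from a second session.
QUERY = "SELECT pid, relid, status, tuples_processed FROM pg_stat_progress_copy"

def format_progress(rows):
    """Render rows from pg_stat_progress_copy as one line per COPY command.

    Each row is (pid, relid, status, tuples_processed); per the issue,
    tuples_processed counts tuples already persisted in the transaction.
    """
    return [
        f"pid={pid} rel={relid} status={status} tuples={tuples}"
        for pid, relid, status, tuples in rows
    ]
```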
678,817 | 23,211,730,798 | IssuesEvent | 2022-08-02 10:44:29 | Zokrates/ZoKrates | https://api.github.com/repos/Zokrates/ZoKrates | closed | constant propagation for imports fails | Type: Bug Type: Enhancement Priority: Medium | > ecc/edwardsAdd.code
```
def main(field[2] pt1, field[2] pt2, field[10] context) -> (field[2]):
field a = context[0]
field d = context[1]
field u1 = pt1[0]
field v1 = pt1[1]
field u2 = pt2[0]
field v2 = pt2[1]
field uOut = (u1*v2 + v1*u2) / (1 + d*u1*u2*v1*v2)
field vOut = (v1*v2 - a*u1*u2) / (1 - d*u1*u2*v1*v2)
return [uOut, vOut]
```
and a version wrapped with defined `context`:
```
import "ecc/edwardsAdd.code" as add
import "ecc/babyjubjubParams.code" as context
def main(field[2] pt1, field[2] pt2) -> (field[2]):
context = context()
field[2] out = add(pt1, pt2, context)
return out
```
results in equal number of constraints:
> Number of constraints: 23
The second version should have fewer constraints, as `context` only holds constants, and as a result we should get several multiplications for free | 1.0 | constant propagation for imports fails - > ecc/edwardsAdd.code
```
def main(field[2] pt1, field[2] pt2, field[10] context) -> (field[2]):
field a = context[0]
field d = context[1]
field u1 = pt1[0]
field v1 = pt1[1]
field u2 = pt2[0]
field v2 = pt2[1]
field uOut = (u1*v2 + v1*u2) / (1 + d*u1*u2*v1*v2)
field vOut = (v1*v2 - a*u1*u2) / (1 - d*u1*u2*v1*v2)
return [uOut, vOut]
```
and a version wrapped with defined `context`:
```
import "ecc/edwardsAdd.code" as add
import "ecc/babyjubjubParams.code" as context
def main(field[2] pt1, field[2] pt2) -> (field[2]):
context = context()
field[2] out = add(pt1, pt2, context)
return out
```
results in equal number of constraints:
> Number of constraints: 23
The second version should have fewer constraints as `context` only holds constants and as a result we should get several multiplications for free | priority | constant propagation for imports fails ecc edwardsadd code def main field field field context field field a context field d context field field field field field uout d field vout a d return and a version wrapped with defined context import ecc edwardsadd code as add import ecc babyjubjubparams code as context def main field field field context context field out add context return out results in equal number of constraints number of constraints the second version should have fewer constraints as context only holds constants and as a result we should get several multiplications for free | 1
411,632 | 12,026,896,010 | IssuesEvent | 2020-04-12 16:03:22 | ooni/probe | https://api.github.com/repos/ooni/probe | closed | Feature request: Run All Tests button | effort/L ooni/probe-mobile priority/medium ux | We have received community requests to include a "Run All Tests" button in the revamped app (on both Android and iOS), to run all OONI Probe tests in sequence, by tapping one button (instead of having to manually run each test individually). | 1.0 | Feature request: Run All Tests button - We have received community requests to include a "Run All Tests" button in the revamped app (on both Android and iOS), to run all OONI Probe tests in sequence, by tapping one button (instead of having to manually run each test individually). | priority | feature request run all tests button we have received community requests to include a run all tests button in the revamped app on both android and ios to run all ooni probe tests in sequence by tapping one button instead of having to manually run each test individually | 1 |
820,981 | 30,798,127,426 | IssuesEvent | 2023-07-31 21:45:24 | PazerOP/tf2_bot_detector | https://api.github.com/repos/PazerOP/tf2_bot_detector | opened | [BUG] Bot Detector will not update | Type: Bug Priority: Medium | Any time I boot up the bot detector, it comes up with this Error. I don't think the bot detector even works while it's in this state. I saw others using the bot detector yesterday and, while I had mine open, it didn't do anything. Yes, I did boot TF2 from the bot detector.
``Update check failed:
- class tf2_bot_detector::http_error
- Failed to HTTP GET https://tf2bd-util.pazer.us:443/AppInstaller/LatestVersion.json?type=Public:
<UNKNOWN>(HTTP 523)``
Log:
[2023-07-31_22-35-39.log](https://github.com/PazerOP/tf2_bot_detector/files/12222482/2023-07-31_22-35-39.log)
I am currently on the latest version, since someone suggested to reinstall it and I've deleted every instance I am aware of, and it hasn't fixed the issue. | 1.0 | [BUG] Bot Detector will not update - Any time I boot up the bot detector, it comes up with this Error. I don't think the bot detector even works while it's in this state. I saw others using the bot detector yesterday and, while I had mine open, it didn't do anything. Yes, I did boot TF2 from the bot detector.
``Update check failed:
- class tf2_bot_detector::http_error
- Failed to HTTP GET https://tf2bd-util.pazer.us:443/AppInstaller/LatestVersion.json?type=Public:
<UNKNOWN>(HTTP 523)``
Log:
[2023-07-31_22-35-39.log](https://github.com/PazerOP/tf2_bot_detector/files/12222482/2023-07-31_22-35-39.log)
I am currently on the latest version, since someone suggested to reinstall it and I've deleted every instance I am aware of, and it hasn't fixed the issue. | priority | bot detector will not update any time i boot up the bot detector it comes up with this error i don t think the bot detector even works while it s in this state i saw others using the bot detector yesterday and while i had mine open it didn t do anything yes i did boot from the bot detector update check failed class bot detector http error failed to http get http log i am currently on the latest version since someone suggested to reinstall it and i ve deleted every instance i am aware of and it hasn t fixed the issue | 1 |
242,867 | 7,849,642,409 | IssuesEvent | 2018-06-20 05:04:01 | borela/naomi | https://api.github.com/repos/borela/naomi | closed | Goto Definition from JSX to component | enhancement priority: medium | Hey,
I would first like to say that I really like your package.
My react code (with fragments, ligatures, flow, es6, ..) looks really good now thanks to you.
The only thing I'm missing is to jump from JSX code to the used component.
Below is a simple example of a component. It would be great if I could simply navigate to the Input or Label component with Goto Definition.
Is this something that this package could do? Or should this be done by a different package?
Or maybe this can't be done?
```
import * as React from "react";
import {uniqueId} from "lodash";
import {Label} from "./core/Label";
import {Input} from "./core/Input";
const InputWithLabel = ({text, type, value, onChange, children}) => {
const id = uniqueId("id_")
let label = children
if (!label) label = text + ':'
return <>
<Label htmlFor={id}>{label}</Label>
<Input id={id}
type={type}
value={value}
onChange={onChange}
placeholder={text}/>
</>
}
```
| 1.0 | Goto Definition from JSX to component - Hey,
I would first like to say that I really like your package.
My react code (with fragments, ligatures, flow, es6, ..) looks really good now thanks to you.
The only thing I'm missing is to jump from JSX code to the used component.
Below is a simple example of a component. It would be great if I could simply navigate to the Input or Label component with Goto Definition.
Is this something that this package could do? Or should this be done by a different package?
Or maybe this can't be done?
```
import * as React from "react";
import {uniqueId} from "lodash";
import {Label} from "./core/Label";
import {Input} from "./core/Input";
const InputWithLabel = ({text, type, value, onChange, children}) => {
const id = uniqueId("id_")
let label = children
if (!label) label = text + ':'
return <>
<Label htmlFor={id}>{label}</Label>
<Input id={id}
type={type}
value={value}
onChange={onChange}
placeholder={text}/>
</>
}
```
| priority | goto definition from jsx to component hey i would first like to say that i really like your package my react code with fragments ligatures flow looks really good now thanks to you the only thing i m missing is to jump from jsx code to the used component below is a simple example of a component it would be great if i could simply navigate to the input or label component with goto definition is this something that this package could do or should this be done by a different package or maybe this can t be done import as react from react import uniqueid from lodash import label from core label import input from core input const inputwithlabel text value onchange children const id uniqueid id let label children if label label text return label input id id type type value value onchange onchange placeholder text | 1
649,346 | 21,280,069,093 | IssuesEvent | 2022-04-14 00:08:10 | BCDevOps/developer-experience | https://api.github.com/repos/BCDevOps/developer-experience | closed | Group config check items by type in AppAssessment report | medium priority app-development | CPU and Memory Requests, CPU and Memory limits, Probes, etc.
DoD
- [ ] Group table items if applicable (resource usage, security, etc) (high) | 1.0 | Group config check items by type in AppAssessment report - CPU and Memory Requests, CPU and Memory limits, Probes, etc.
DoD
- [ ] Group table items if applicable (resource usage, security, etc) (high) | priority | group config check items by type in appassessment report cpu and memory requests cpu and memory limits probes etc dod group table items if applicable resource usage security etc high | 1 |
311,889 | 9,540,011,972 | IssuesEvent | 2019-04-30 18:24:31 | RobotLocomotion/drake | https://api.github.com/repos/RobotLocomotion/drake | closed | PortSwitch primitive system | priority: medium team: russ type: feature request | As discussed with a few of you, I've got the "PortSwitch" system mostly coded up now, and will PR it this weekend. It's a simple system that offers many ports on the input, but passes-through only one of them to the output (with one additional input for the port_selector). I imagine this being useful for combining many low-level controller primitives (e.g. DifferentialIK, PlanEval, hybrid control, etc) with all of them wired up, but only one of them getting called thanks to this switch.
cc @kmuhlrad @pangtao22 | 1.0 | PortSwitch primitive system - As discussed with a few of you, I've got the "PortSwitch" system mostly coded up now, and will PR it this weekend. It's a simple system that offers many ports on the input, but passes-through only one of them to the output (with one additional input for the port_selector). I imagine this being useful for combining many low-level controller primitives (e.g. DifferentialIK, PlanEval, hybrid control, etc) with all of them wired up, but only one of them getting called thanks to this switch.
cc @kmuhlrad @pangtao22 | priority | portswitch primitive system as discussed with a few of you i ve got the portswitch system mostly coded up now and will pr it this weekend it s a simple system that offers many ports on the input but passes through only one of them to the output with one additional input for the port selector i imagine this being useful for combining many low level controller primitives e g differentialik planeval hybrid control etc with all of them wired up but only one of them getting called thanks to this switch cc kmuhlrad | 1 |
58,025 | 3,087,082,880 | IssuesEvent | 2015-08-25 09:15:34 | pavel-pimenov/flylinkdc-r5xx | https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx | opened | Individual hub settings are not applied on automatic reconnection | bug imported Priority-Medium | _From [[email protected]](https://code.google.com/u/101495626515388303633/) on October 01, 2013 02:05:31_
1. Connect to a new hub that rejects you by nick or tag, by share size, or by slot count.
2. Add the hub to favorites.
3. Change the hub's individual settings (right-click on the tab) so that the hub lets you in, then close the settings window.
4. Wait for the automatic reconnection to the hub.
5. The hub rejects you because your changes did not take effect.
6. Press the hub reconnect button.
7. The new settings take effect and the connection succeeds.
FlylinkDC++ r502 -rc2-x64 build 15562 Compiled on: 2013-09-28
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1322_ | 1.0 | Individual hub settings are not applied on automatic reconnection - _From [[email protected]](https://code.google.com/u/101495626515388303633/) on October 01, 2013 02:05:31_
1. Connect to a new hub that rejects you by nick or tag, by share size, or by slot count.
2. Add the hub to favorites.
3. Change the hub's individual settings (right-click on the tab) so that the hub lets you in, then close the settings window.
4. Wait for the automatic reconnection to the hub.
5. The hub rejects you because your changes did not take effect.
6. Press the hub reconnect button.
7. The new settings take effect and the connection succeeds.
FlylinkDC++ r502 -rc2-x64 build 15562 Compiled on: 2013-09-28
_Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1322_ | priority | individual hub settings are not applied on automatic reconnection from on october connect to a new hub that rejects you by nick or tag by share size or by slot count add the hub to favorites change the hub s individual settings right click on the tab so that the hub lets you in then close the settings window wait for the automatic reconnection to the hub the hub rejects you because your changes did not take effect press the hub reconnect button the new settings take effect and the connection succeeds flylinkdc build compiled on original issue | 1
84,207 | 3,655,236,394 | IssuesEvent | 2016-02-17 15:40:21 | ngageoint/hootenanny | https://api.github.com/repos/ngageoint/hootenanny | closed | Investigate conflation behavior between gdb and shapefile | Category: Core Priority: Medium Status: Ready For Review Type: Support | Investigate conflation behavior when input data is ingested from a gdb source vs. a shapefile to see if there are differences in the output. Data is from customer. | 1.0 | Investigate conflation behavior between gdb and shapefile - Investigate conflation behavior when input data is ingested from a gdb source vs. a shapefile to see if there are differences in the output. Data is from customer. | priority | investigate conflation behavior between gdb and shapefile investigate conflation behavior when input data is ingested from a gdb source vs a shapefile to see if there are differences in the output data is from customer | 1
399,332 | 11,747,473,657 | IssuesEvent | 2020-03-12 13:42:04 | onaio/reveal-frontend | https://api.github.com/repos/onaio/reveal-frontend | closed | Write a spec for more secure token store mechanism | Priority: Medium has pr | Following the discussion in https://github.com/onaio/reveal-frontend/issues/39, we now need to write a specification document that covers:
- The intermediary service that will handle OAuth2 access tokens aka "front end proxy"
- The reveal backend, if any | 1.0 | Write a spec for more secure token store mechanism - Following the discussion in https://github.com/onaio/reveal-frontend/issues/39, we now need to write a specification document that covers:
- The intermediary service that will handle OAuth2 access tokens aka "front end proxy"
- The reveal backend, if any | priority | write a spec for more secure token store mechanism following the discussion in we now need to write a specification document that covers the intermediary service that will handle access tokens aka front end proxy the reveal backend if any | 1 |
407,399 | 11,912,916,743 | IssuesEvent | 2020-03-31 11:04:40 | LiamTheProgrammer/liams-projects | https://api.github.com/repos/LiamTheProgrammer/liams-projects | closed | Background Static in Audio | js priority: medium webaudio worse in chrome worse in opera | When I play certain audio, there's noticeable background static. Any ideas on how to fix this? | 1.0 | Background Static in Audio - When I play certain audio, there's noticeable background static. Any ideas on how to fix this? | priority | background static in audio when i play certain audio there s noticeable background static any ideas on how to fix this | 1 |
63,706 | 3,197,706,866 | IssuesEvent | 2015-10-01 07:33:27 | cogciprocate/bismit | https://api.github.com/repos/cogciprocate/bismit | closed | Unmarry Ganglion, SDR, and Cortical Area Sizes | enhancement priority medium | Currently it is assumed (and checked) that the sizes of these things are all the same. It's now time to throw away that assumption and allow arbitrarily sized inputs and outputs (what we're now calling ganglions) for cortical areas.
TBD:
- Axon space.
- Synaptic range Limits.
- How to negotiate sizes for centering (probably new OclDimensions trait).
| 1.0 | Unmarry Ganglion, SDR, and Cortical Area Sizes - Currently it is assumed (and checked) that the sizes of these things are all the same. It's now time to throw away that assumption and allow arbitrarily sized inputs and outputs (what we're now calling ganglions) for cortical areas.
TBD:
- Axon space.
- Synaptic range Limits.
- How to negotiate sizes for centering (probably new OclDimensions trait).
| priority | unmarry ganglion sdr and cortical area sizes currently it is assumed and checked that the sizes of these things are all the same it s now time to throw away that assumption and allow arbitrarily sized inputs and outputs what we re now calling ganglions for cortical areas tbd axon space synaptic range limits how to negotiate sizes for centering probably new ocldimensions trait | 1 |
271,949 | 8,494,127,297 | IssuesEvent | 2018-10-28 18:39:00 | angular-buddies/angular-buddies | https://api.github.com/repos/angular-buddies/angular-buddies | opened | Create a 'format' architect custom builder | comp: prettier effort2: medium (days) priority: 2 (required) type: feature | ### Bug Report or Feature Request (mark with an `x`)
```
- [ ] bug report -> please search issues before submitting
- [x] feature request
```
### Package (mark with an `x`)
```
- [x] @angular-buddies/prettier
```
### Versions
@angular-buddies/prettier: v1.0.0-alpha.0
### Desired functionality
Add a new Angular Architect Custom Builder in order to replace the prettify script.
### Mention any other details that might be useful
See https://github.com/angular/angular-cli/tree/master/packages/angular_devkit/build_angular/src/tslint for some inspiration.
| 1.0 | Create a 'format' architect custom builder - ### Bug Report or Feature Request (mark with an `x`)
```
- [ ] bug report -> please search issues before submitting
- [x] feature request
```
### Package (mark with an `x`)
```
- [x] @angular-buddies/prettier
```
### Versions
@angular-buddies/prettier: v1.0.0-alpha.0
### Desired functionality
Add a new Angular Architect Custom Builder in order to replace the prettify script.
### Mention any other details that might be useful
See https://github.com/angular/angular-cli/tree/master/packages/angular_devkit/build_angular/src/tslint for some inspiration.
| priority | create a format architect custom builder bug report or feature request mark with an x bug report please search issues before submitting feature request package mark with an x angular buddies prettier versions angular buddies prettier alpha desired functionality add a new angular architect custom builder in order to replace the prettify script mention any other details that might be useful see for some inspiration | 1 |