---
license: apache-2.0
task_categories:
- question-answering
- text-classification
language:
- en
- fr
- es
size_categories:
- 10K<n<100K
tags:
- climate
- policy
---
This dataset is curated by the [GIZ Data Service Center](https://www.giz.de/expertise/html/63018.html). The source data comes from an internal GIZ team (IKI_Tracs) and from [Climatewatchdata](https://www.climatewatchdata.org/data-explorer/historical-emissions?historical-emissions-data-sources=climate-watch&historical-emissions-gases=all-ghg&historical-emissions-regions=All%20Selected&historical-emissions-sectors=total-including-lucf%2Ctotal-including-lucf&page=1), where Climate Watch has analysed the Intended Nationally Determined Contributions (INDC), NDCs and Revised/Updated NDCs of countries to answer important questions related to climate change.
# Specifications
- Dataset size: ~85k
- Languages: English, French, Spanish
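
The dataset can be loaded with the 🤗 `datasets` library. A minimal loading sketch is below; the repository ID is a placeholder (the card does not state the exact Hub path) and the available split names may differ.

```python
# Minimal loading sketch, assuming the dataset is hosted on the Hugging Face Hub.
# The repository ID below is a placeholder; replace it with the actual Hub path.
from datasets import load_dataset

dataset = load_dataset("giz/ndc-policy-dataset")  # hypothetical repo ID
print(dataset)                                    # split names may differ
print(dataset["train"][0]["ResponseText"])
```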
# Columns | |
- **index (type:int)**: Unique response ID
- **ResponseText (type:str)**: Annotated answer/response to the query
- **Alpha3 (type:str)**: Country alpha-3 code (ISO 3166)
- **Country (type:str)**: Country name
- **Document (type:str)**: Name/type of the policy document from which the response is taken
- **IkiInfo (type: list[dict])**: A ResponseText can occur as the answer/response to different kinds of queries, so all raw information for each occurrence is preserved. Each dictionary object represents one such occurrence and provides all of its raw metadata. If None, the entry belongs to the Climate Watch data and not to the IKI Tracs data.
- **CWInfo (type: list[dict])**: Same structure as IkiInfo, but for Climate Watch occurrences. If None, the entry belongs to the IKI Tracs data and not to the Climate Watch data.
- **Source (type:list[str])**: Name(s) of the data source(s)
- **Target (type:list)**: Value at index 0 is the number of times ResponseText appears as 'Target'; value at index 1 the number of times it appears as not-Target (a sketch that derives binary classification labels from these count columns follows the column list)
- **Action (type:list)**: Value at index 0 is the number of times ResponseText appears as 'Action'; value at index 1 the number of times it appears as not-Action
- **Policies_Plans (type:list)**: Value at index 0 is the number of times ResponseText appears as 'Policy/Plan'; value at index 1 the number of times it appears as not-Policy/Plan
- **Mitigation (type:list)**: Value at index 0 is the number of times ResponseText appears in reference to Mitigation; value at index 1 the number of times it does not
- **Adaptation (type:list)**: Value at index 0 is the number of times ResponseText appears in reference to Adaptation; value at index 1 the number of times it does not
- **language (type:str)**: ISO code of the language of ResponseText.
- **context (type:list[str])**: List of paragraphs/text chunks from the country's document that contain the ResponseText. These results are based on an Okapi BM25 retriever and hence do not represent ground truth.
- **context_lang (type:str)**: ISO code of the language of the context. In some cases the context and ResponseText differ, as the annotators sometimes provided a translated response rather than the original text from the document.
- **matching_words (type:list[list[str]])**: For each context, the words from ResponseText that match within it (stopwords not considered)
- **response_words (type:list[str])**: Tokens/words from ResponseText (stopwords not considered)
- **context_wordcount (type:list[int])**: Number of tokens/words in each context (remember that context is itself a list of strings; stopwords not considered)
- **strategy (type:str)**: One of *small*, *medium* or *large*. Represents the length of the paragraphs/text chunks considered when searching for the right context for ResponseText.
- **match_onresponse (type:list[float])**: Percentage of words overlapping between ResponseText and each context, relative to the length of ResponseText.
- **candidate (type:list[list[int]])**: Candidate span within each context that corresponds (by fuzzy matching/similarity) to ResponseText. The values at index (0, 1) represent the (start, end) of the string within the context.
- **fetched_text (type:list[str])**: The candidate text within each context that corresponds (by fuzzy matching/similarity) to ResponseText.
- **response_translated (type:str)**: Translated ResponseText
- **context_translated (type:str)**: Translated context
- **candidate_translated (type:str)**: Candidate index values for the translated text (see the 'candidate' column)
- **fetched_text_translated (type:str)**: Translated candidates (see the 'candidate' column)
- **QA_data (type:dict)**: Metadata about ResponseText, describing the nature of the query to which ResponseText is the answer/response
- **match_onanswer (type:list[float])**: Percentage match between ResponseText and the candidate text (based on the statistics, it is recommended to keep only values above 0.3% as answers and to treat the context as 'no answer' in the SQuAD2 data format; see the conversion sketch after this list)
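
The count columns above (Target, Action, Policies_Plans, Mitigation, Adaptation) can be collapsed into binary text-classification labels, for example by majority vote over the positive/negative counts. A minimal sketch, assuming each column holds a two-element list `[positive_count, negative_count]` as described above; the function name and the majority-vote rule are illustrative choices, not part of the dataset.

```python
# Minimal sketch: derive binary labels from the count columns.
# Assumes each column holds [positive_count, negative_count] as described above;
# the majority-vote rule and the function name are illustrative choices.
LABEL_COLUMNS = ["Target", "Action", "Policies_Plans", "Mitigation", "Adaptation"]

def derive_labels(row: dict) -> dict:
    labels = {}
    for col in LABEL_COLUMNS:
        positive, negative = row[col][0], row[col][1]
        labels[col] = int(positive > negative)  # 1 if the positive count wins
    return labels

# Example with a hand-made row (values are made up for illustration):
row = {"Target": [3, 1], "Action": [0, 2], "Policies_Plans": [1, 1],
       "Mitigation": [2, 0], "Adaptation": [0, 3]}
print(derive_labels(row))
# {'Target': 1, 'Action': 0, 'Policies_Plans': 0, 'Mitigation': 1, 'Adaptation': 0}
```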
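
For the question-answering task, the `context`, `candidate` and `match_onanswer` columns can be combined into SQuAD2-style examples: candidates whose match score passes the recommended threshold become answers, and the remaining contexts become 'no answer' entries. A minimal sketch under these assumptions; `row_to_squad2` and the way the question string is obtained are illustrative, the candidate offsets are assumed to be character positions, and the threshold interpretation (0.3 vs 0.3%) should be verified against the data.

```python
# Minimal sketch: build SQuAD2-style examples from one dataset row.
# Assumptions: `candidate` holds (start, end) character offsets inside each
# context, and `match_onanswer` scores align one-to-one with `context`.
THRESHOLD = 0.3  # the card recommends keeping values above "0.3%"; verify the scale on real data

def row_to_squad2(row: dict, question: str) -> list[dict]:
    examples = []
    for i, context in enumerate(row["context"]):
        if row["match_onanswer"][i] > THRESHOLD:
            start, end = row["candidate"][i]
            answers = {"text": [context[start:end]], "answer_start": [start]}
        else:
            answers = {"text": [], "answer_start": []}  # SQuAD2 'no answer'
        examples.append({
            "id": f'{row["index"]}-{i}',
            "question": question,
            "context": context,
            "answers": answers,
        })
    return examples
```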