---
title: "Working with Databases"
---

## Understanding IMF Databases

The IMF serves many different databases through its API, and the API needs to know which of these databases you're requesting data from. Before you can fetch any data, you'll need to:

1. Get a list of available databases
2. Find the database ID for the data you want

Then you can use that database ID to fetch the data.

## Fetching the Database List

### Fetching an Index of Databases with the `imf_databases` Function

To obtain the list of available databases and their corresponding IDs, use `imf_databases`:

``` {python}
import imfp

# Fetch the list of databases available through the IMF API
databases = imfp.imf_databases()
databases.head()
```
This function returns the IMF's listing of 259 databases available through the API. (In reality, a few of the listed databases are defunct and not actually available: FAS_2015, GFS01, FM202010, APDREO202010, AFRREO202010, WHDREO202010, BOPAGG_2020, and DOT_2020Q1 were unavailable as of last check.)

## Exploring the Database List

To explore the database list, you can view subsets of the data frame by row number with `databases.loc`:

``` {python}
# View a subset consisting of rows 5 through 9
databases.loc[5:9]
```

Or, if you already know which database you want, you can fetch the corresponding code by searching for a string match with `str.contains` and subsetting the data frame to the matching rows. For instance, here's how to search for commodities data:

``` {python}
databases[databases['description'].str.contains("Commodity")]
```
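Note that `str.contains` matches case-sensitively by default, so a search for "commodity" would miss descriptions containing "Commodity". You can broaden the search with `case=False`. Here is a minimal sketch on a small hand-made frame (so it runs without an API call); the toy rows are invented for illustration:

``` {python}
# Case-insensitive search: pass case=False to str.contains so that
# "commodity" also matches "Commodity". Toy frame for illustration.
import pandas as pd

databases = pd.DataFrame({
    "database_id": ["PCPS", "IFS"],
    "description": ["Primary Commodity Price System",
                    "International Financial Statistics"],
})
matches = databases[databases["description"].str.contains("commodity", case=False)]
```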
See also [Working with Large Data Frames](usage.qmd#working-with-large-data-frames) for sample code showing how to view the full contents of the data frame in a browser window.

## Best Practices

1. **Cache the Database List**: The database list rarely changes. Consider saving it locally if you'll be making multiple queries. See [Caching Strategy](rate_limits.qmd#caching-strategy) for sample code.

2. **Search Strategically**: Use specific search terms to find relevant databases. For example:

   - "Price" for price indices
   - "Trade" for trade statistics
   - "Financial" for financial data

3. **Use a Browser Viewer**: See [Working with Large Data Frames](usage.qmd#working-with-large-data-frames) for sample code showing how to view the full contents of the data frame in a browser window.

4. **Note Database IDs**: Once you find a database you'll use frequently, note its database ID for future reference.
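The caching advice above can be sketched as follows. This is a minimal illustration, not part of `imfp` itself: the `get_databases` helper name, the local filename, and the choice of CSV are all arbitrary:

``` {python}
# Minimal caching sketch: save the database list to a local CSV the
# first time it is fetched, and reuse the saved copy afterward.
# Helper name, filename, and CSV format are illustrative choices.
import os
import pandas as pd

def get_databases(cache_path="imf_databases.csv", fetch=None):
    """Return the database list, preferring a local cached copy."""
    if os.path.exists(cache_path):
        return pd.read_csv(cache_path)
    # Fall back to a live fetch (e.g., pass fetch=imfp.imf_databases)
    df = fetch()
    df.to_csv(cache_path, index=False)
    return df
```

In practice you would call `get_databases(fetch=imfp.imf_databases)`; subsequent calls in the same working directory read the saved file instead of hitting the API.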
## Next Steps

Once you've identified the database you want to use, you'll need to:

1. Get the list of parameters for that database (see [Parameters](parameters.qmd))
2. Use those parameters to fetch your data (see [Datasets](datasets.qmd))
---
title: "Requesting Datasets"
---

## Making a Request

To retrieve data from an IMF database, you'll need the database ID and any relevant [filter parameters](parameters.qmd). Here's a basic example using the Primary Commodity Price System (PCPS) database:

``` {python}
import imfp

# Get parameters and their valid codes
params = imfp.imf_parameters("PCPS")

# Fetch annual coal price index data
df = imfp.imf_dataset(
    database_id="PCPS",
    freq=["A"],            # Annual frequency
    commodity=["PCOAL"],   # Coal prices
    unit_measure=["IX"],   # Index
    start_year=2000,
    end_year=2015
)
```
This example creates two objects we'll use in the following sections:

- `params`: A dictionary of parameters and their valid codes
- `df`: The retrieved data frame containing our requested data

## Decoding Returned Data

When you retrieve data using `imf_dataset`, the returned data frame contains columns that correspond to the parameters you specified in your request. However, these columns use input codes (short identifiers) rather than human-readable descriptions. To make your data more interpretable, you can replace these codes with their corresponding text descriptions using the parameter information from `imf_parameters`, so that codes like "A" (Annual) or "W00" (World) become self-explanatory labels.

For example, suppose we want to decode the `freq` (frequency), `ref_area` (geographical area), and `unit_measure` (unit) columns in our data frame. We'll merge the parameter descriptions into the data frame:

``` {python}
# Decode frequency codes (e.g., "A" → "Annual")
df = df.merge(
    # Select code-description pairs
    params['freq'][['input_code', 'description']],
    # Match codes in the data frame...
    left_on='freq',
    # ...to codes in the parameter data
    right_on='input_code',
    # Keep all data rows
    how='left'
).drop(columns=['freq', 'input_code']
).rename(columns={"description": "freq"})

# Decode geographic area codes (e.g., "W00" → "World")
df = df.merge(
    params['ref_area'][['input_code', 'description']],
    left_on='ref_area',
    right_on='input_code',
    how='left'
).drop(columns=['ref_area', 'input_code']
).rename(columns={"description": "ref_area"})

# Decode unit codes (e.g., "IX" → "Index")
df = df.merge(
    params['unit_measure'][['input_code', 'description']],
    left_on='unit_measure',
    right_on='input_code',
    how='left'
).drop(columns=['unit_measure', 'input_code']
).rename(columns={"description": "unit_measure"})

df.head()
```
After decoding, the data frame is much more human-interpretable. This transformation makes the data more accessible for analysis and presentation while maintaining all the original information.

## Understanding the Data Frame

Note that the returned data frame also contains coded values in a few other columns.

Codes in the `time_format` column are ISO 8601 duration codes. In this case, "P1Y" means "periods of 1 year." See [Time Period Conversion](usage.qmd#time-period-conversion) for more information on reconciling time periods.

The `unit_mult` column gives the power of ten by which to scale the value column; in other words, it is the number of zeroes you should add to the value. For instance, if values are in millions, the unit multiplier will be 6; if in billions, it will be 9. See [Unit Multiplier Adjustment](usage.qmd#unit-multiplier-adjustment) for more information on reconciling units.
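As a sketch of the unit adjustment on toy values (the `obs_value` column name is assumed here for the value column; in real output these columns may arrive as strings and need numeric conversion first):

``` {python}
# Apply the unit multiplier: scale the value column by 10 ** unit_mult.
# Toy values for illustration; convert string columns to numeric first.
import pandas as pd

df = pd.DataFrame({"obs_value": ["1.5", "2.0"], "unit_mult": ["6", "9"]})
df["obs_value"] = df["obs_value"].astype(float)
df["unit_mult"] = df["unit_mult"].astype(int)
df["adjusted_value"] = df["obs_value"] * 10 ** df["unit_mult"]
```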
---
title: "Installation"
---

## Prerequisites

To install the latest version of `imfp`, you will need [Python 3.10 or later](https://www.python.org/downloads/) installed on your system.

If you don't already have Python, we recommend installing [the `uv` package manager](https://astral.sh/setup-uv/) and installing Python with `uv python install`.

## Installation

To install the latest stable `imfp` wheel from PyPI using pip:

``` bash
pip install --upgrade imfp
```

Alternatively, to install from the source code on GitHub, you can use the following command:

``` bash
pip install --upgrade git+https://github.com/Promptly-Technologies-LLC/imfp.git
```

You can then import the package in your Python script:

``` python
import imfp
```
## Suggested Dependencies for Data Analysis

`imfp` outputs data in a `pandas` data frame, so you will want the `pandas` package (installed with `imfp`) for viewing and manipulating this object type. For data visualization, we recommend installing these additional packages:

``` bash
pip install -q matplotlib seaborn
```

You can then import these packages in your Python script:

``` python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```
## Development Installation

To get started with development of `imfp`:

1. Fork and clone the repository
2. Install [uv](https://astral.sh/setup-uv/) with `curl -LsSf https://astral.sh/uv/install.sh | sh`
3. Install the dependencies with `uv sync`
4. Install a git pre-commit hook to enforce conventional commits:

   ``` bash
   curl -o- https://raw.githubusercontent.com/tapsellorg/conventional-commits-git-hook/master/scripts/install.sh | sh
   ```

To edit and preview the documentation, you'll also want to install the [Quarto CLI tool](https://quarto.org/docs/download/).