Adding my scraper for all available imf dbs #1
base: main
Conversation
Hi, Zay. Wow, you built quite the elaborate scraper!

First thing you need to do is delete the pycache folders from your fork. I actually just added these to .gitignore, so if you do a git pull to your fork and then a git pull from your fork to your computer, you will get the updated .gitignore file on your computer, and then your next push won't include these files. (You don't need to update your pull request. Your PR will automatically update as you edit the files on your fork.)

Second, let's talk about project organization. Standard layout for a Python library like imfp is to have all the .py code inside the library_name/library_name folder (in this case, imfp/imfp). So any Python code you add to the library should go there.

However, based on the "principle of single responsibility," I actually think there are three projects here. First, you have some classes that seem to be for general web-scraping purposes, using Selenium and such. Those should probably be their own Python library. Second, there's imfp, a library with functions for scraping the IMF API. And third, there's the application you've built for scraping all IMF databases. I'd love to see you create your own repos for your scraper classes and your IMF scraping application, and I'd be happy to help you get them set up and to link to your scraper from the imfp documentation.

I'd also be happy to accept some smaller pull requests for imfp if you're looking to be added as a contributor for your resume. I have a list of "Planned features" in the README, and I will add you as a contributor if you take care of any one of these for me. Adding different exception types might be the easiest one for you to tackle. If you tell ChatGPT about an error case and ask it what would be the most appropriate exception type from either the Python standard library or from the libraries imported by imfp, it will tell you which ones to use, and you can implement them in imfp. That would be enough to get added as a contributor.

If you make larger contributions, I'd be happy to also add you as an author.
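As a rough illustration of that last suggestion, here is a minimal sketch of swapping a generic failure for more specific standard-library exception types. The function name and error cases below are hypothetical, not imfp's actual API:

```python
def check_api_response(status_code: int, body: str) -> str:
    """Hypothetical response check; imfp's real internals may differ."""
    if status_code == 429:
        # More specific than a bare Exception: tells callers this is a
        # connection-level problem they can retry after a delay.
        raise ConnectionError("IMF API rate limit exceeded; retry later")
    if not body:
        # ValueError signals that the returned data itself was unusable.
        raise ValueError("IMF API returned an empty response body")
    return body
```

The payoff is that callers can catch `ConnectionError` and `ValueError` separately and handle each case differently, instead of relying on a catch-all `except Exception`.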
Hello, Chris.
I'm glad you took it into consideration. I would love for us to have a meeting or a video call where you can explain to me in detail what you want from me; I would like to be a contributor or even an author.
Please don't make adjustments to the package; work on something else, because I will be improving it when I can.
The problem is that I contracted Covid and it's wrecking me; I might have to go to the hospital for a month in order to get better. That's why I am replying late, sorry for that.
I want to work on IMFP as soon as I am out, so keep that in mind.
I will contact you as soon as I get better.
Thanks, man.
________________________________
From: Christopher Carroll Smith ***@***.***>
Sent: 15 May 2023 15:44
To: chriscarrollsmith/imfp ***@***.***>
Cc: Zaybak ***@***.***>; Author ***@***.***>
Subject: Re: [chriscarrollsmith/imfp] Adding my scraper for all available imf dbs (PR #1)
Oh man, so sorry you got Covid! I'm for sure up for a video call. Let me know when you feel up to it.
IMF_DBs_Scraper has an IMFP_Data directory, which contains imf_main.py to run. This code uses my classes inside the scraper module.