README.txt
Modules Installed
The libraries installed for the project were the following:
1.) sklearn.linear_model
2.) LogisticRegression (from sklearn.linear_model)
3.) pylab
4.) numpy
5.) scipy
6.) unicodedata
These should be imported in both files.
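The shared imports can be sketched as below. This is a minimal sketch based on the module list above; the `normalize` helper is a hypothetical example of what unicodedata is typically used for (cleaning tweet text), not code from the project itself.

```python
# Imports shared by both scripts: sklearn supplies the classifiers,
# numpy/scipy the numerics, unicodedata the text cleanup.
# (pylab would only be needed where plots are drawn.)
import unicodedata
import numpy as np
from sklearn.linear_model import LogisticRegression

def normalize(text):
    # Hypothetical helper: decompose accented characters (NFKD)
    # and drop the combining marks, e.g. "café" -> "cafe"
    return "".join(
        c for c in unicodedata.normalize("NFKD", text)
        if not unicodedata.combining(c)
    )
```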
Our dataset is the TWITTER.txt file, which is built as follows.
Step 1:- Tweets.txt is a REST API file used to retrieve tweets from the Twitter handle; the retrieved tweets are stored in the tweets.txt file.
Step 2:- These tweets are then classified and placed into the TWITTER.txt file, which contains the classified tweets.
Step 3:- TESTINGTWITTER.txt is a file we created to test the tweets for the recommendations.
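Loading the classified tweets could look like the sketch below. The "label<TAB>tweet" layout is an assumption, since the actual format of TWITTER.txt is not documented here; adjust the split to match the real file.

```python
# Hypothetical loader for a classified-tweets file such as TWITTER.txt.
# Assumed layout (not confirmed by the project): one tweet per line,
# formatted as "label<TAB>tweet text".
def load_classified(path):
    tweets, labels = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            label, _, text = line.partition("\t")
            labels.append(label)
            tweets.append(text)
    return tweets, labels
```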
IN SVM
We need to filter the input tweets:
three separate files, for positive tweets, negative tweets and advertisements, are created after the SVM file is run.
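The SVM filtering step might be sketched as follows. The training tweets, labels and output location here are illustrative only; the real project trains on the classified tweets in TWITTER.txt and writes its own file names.

```python
# Illustrative sketch of the SVM step: classify tweets into three
# classes and route each tweet into its own output file.
import os
import tempfile
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Toy training data (hypothetical; stands in for the classified tweets)
train_tweets = [
    "love this university great campus",
    "terrible experience would not recommend",
    "enroll now limited offer apply today",
]
train_labels = ["positive", "negative", "advertisement"]

vec = CountVectorizer()
clf = LinearSVC().fit(vec.fit_transform(train_tweets), train_labels)

new_tweets = ["great campus love it", "limited offer enroll today"]
pred = clf.predict(vec.transform(new_tweets))

# One output file per class, as in the README's description
buckets = {"positive": [], "negative": [], "advertisement": []}
for tweet, label in zip(new_tweets, pred):
    buckets[label].append(tweet)

out_dir = tempfile.mkdtemp()
for label, items in buckets.items():
    with open(os.path.join(out_dir, label + ".txt"), "w", encoding="utf-8") as f:
        f.write("\n".join(items))
```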
In Logistic Regression,
once those files are created,
we feed the training set, the testing tweets, and the generated positive, negative and advertisement .txt files into the code.
Result
From the logistic regression file we receive the Recommendations.py file as output. This is the final output of our system: the recommended universities, obtained after comparing against the advertisements.
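The logistic-regression stage described above could be sketched as below. The data, labels and the "keep only tweets predicted positive" rule are assumptions used for illustration; the actual comparison with the advertisements file is not detailed in this README.

```python
# Illustrative sketch of the logistic-regression stage: train on
# classified tweets, score the test tweets, and keep those predicted
# positive as recommendation candidates. All data is hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

train = ["great faculty", "awful housing", "apply now cheap tuition"]
labels = ["positive", "negative", "advertisement"]

vec = CountVectorizer()
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(train), labels)

test = ["great faculty and campus", "apply now"]
pred = clf.predict(vec.transform(test))

# Assumed rule: tweets predicted "positive" become recommendations
recommended = [t for t, p in zip(test, pred) if p == "positive"]
```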
Thank you