Attached is the question doc and reading recommendation
C++ scripting assignment; I need help. The information is in the attached doc.
Turnitin® enabled. This assignment will be submitted to Turnitin®.
Instructions
Create a VBScript script (w3_firstname_lastname.vbs) that takes one parameter (a folder name) and does the following:
1) List all file names, sizes, and creation dates in the given folder.
2) The parameter is the root folder name. The script should check and validate the folder name.
3) Optionally, you can save the list into a file "Results.txt" using the redirection operator ">".
4) Make sure to include a comment block (flowerbox) in your code.
5) Sample run (an illustrative sketch of the listing logic follows the sample run):
C:\entd261>cscript.exe w3_sammy_abaza.vbs "c:\entd261" > results.txt
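For orientation only, here is a minimal Python sketch of the logic the VBScript must implement: take the folder path as the single command-line argument, validate it, and print each file's name, size, and creation date. The script name and output format are placeholders; the actual submission must be written in VBScript.

# list_folder_sketch.py - illustrative Python sketch of the required listing logic
# (hypothetical helper; the graded script must be w3_firstname_lastname.vbs in VBScript)
import os
import sys
from datetime import datetime

def list_folder(folder):
    # Validate the folder name before doing any work
    if not os.path.isdir(folder):
        print(f"Error: '{folder}' is not a valid folder")
        return
    # Print name, size, and creation date for every file in the folder
    for entry in os.scandir(folder):
        if entry.is_file():
            info = entry.stat()
            created = datetime.fromtimestamp(info.st_ctime)
            print(f"{entry.name}\t{info.st_size} bytes\t{created:%Y-%m-%d %H:%M:%S}")

if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python list_folder_sketch.py <folder>")
    else:
        list_folder(sys.argv[1])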
Submit your week 3 work in w3_firstname_lastname.txt (Please save the file as a text file and upload the text file here for final review.)
Mod 2 Java Discussion
Explain your understanding of methods, objects, classes, and the object-oriented nature of Java with the help of code segments. Avoid using the examples given in the course materials; you can write your own code. Your response must be 150 words or more.
response database
refer to the attached document
Management of an Information Technology Department
What would you do to improve the visibility and perception of an IT department?
Discuss how to improve the visibility and perception of an IT department, with special emphasis on how this would be measured.
Aligned Objectives
- Navigate ethical issues both personally and at the corporate level
- Describe the organization of an IT department
- Diagnose and correct perception and visibility issues in an IT department
- Develop procedures that protect a department when terminating staff
Response Parameters
- Each initial post should be 250 to 350 words.
Internal-External Disk Drives
1) How many internal and external disk drives can be connected to a laptop or desktop computer?
2) Let's discuss how you would choose a disk drive for your computer.
qualitative proposal
qualitative proposal on vision in robotics
response database concepts integrity
Refer to the attached documents containing two posts. Please create two responses to them and attach references for each post.
computer science final
Submit all your answers in one notebook file named final_yourname.ipynb.
Question 1 (80 pts)
Sentiment analysis helps data scientists analyze many kinds of data, e.g., business, politics, and social media. For example, the IMDb dataset file "movie_data.csv" contains 25,000 highly polar IMDb movie reviews: 12,500 positive and 12,500 negative (negative reviews are labeled '0' and positive reviews '1').
Similarly, "amazon_data.txt" and "yelp_data.txt" each contain 1,000 labeled reviews, with negative reviews labeled '0' and positive reviews labeled '1'.
For further help, check the notebook sentiment_analysis.ipynb in Canvas and also explore the link: https://medium.com/@vasista/sentiment-analysis-using-svm338d418e3ff1
Answer the following:
a) Read all the above data files (.csv and .txt) into Python pandas DataFrames. For each dataset, use 70% as the training set and 30% as the test set.
b) Using both CountVectorizer and TfidfVectorizer from the sklearn library separately, perform logistic regression classification on the IMDb dataset and evaluate the accuracies on the test set.
c) Classify the Amazon dataset using logistic regression and a neural network (two hidden layers), compare their performance, and show the confusion matrices.
d) Generate a classification model for the Yelp dataset with the K-NN algorithm. Fit and test the model for different values of K (from 1 to 5) using a for loop, record the KNN testing accuracy in a variable (scores), and plot it.
e) Generate predictions for the following reviews using the logistic regression classifier trained on the Amazon dataset: Review 1 = "SUPERB, I AM IN LOVE IN THIS PHONE"; Review 2 = "Do not purchase this product. My cell phone blast when I switched the charger". (Hedged code sketches for parts (a)-(e) follow below.)
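A minimal sketch of parts (a) and (b), assuming "movie_data.csv" has columns named review and sentiment and that the two .txt files are tab-separated with one review and a 0/1 label per line; the column names, separators, and vectorizer settings are assumptions to adjust against the actual files.

# Q1 (a)-(b): load the three datasets, split 70/30, and compare CountVectorizer vs.
# TfidfVectorizer with logistic regression on the IMDb data (assumed file layouts).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# (a) Read the files into DataFrames
imdb = pd.read_csv("movie_data.csv")                          # assumed columns: review, sentiment
amazon = pd.read_csv("amazon_data.txt", sep="\t", names=["review", "label"])
yelp = pd.read_csv("yelp_data.txt", sep="\t", names=["review", "label"])

# 70% training / 30% test split for the IMDb data (repeat for the other datasets)
X_train, X_test, y_train, y_test = train_test_split(
    imdb["review"], imdb["sentiment"], test_size=0.30, random_state=42)

# (b) Try both vectorizers with the same logistic regression classifier
for name, vectorizer in [("CountVectorizer", CountVectorizer(stop_words="english")),
                         ("TfidfVectorizer", TfidfVectorizer(stop_words="english"))]:
    Xtr = vectorizer.fit_transform(X_train)    # learn the vocabulary on the training set only
    Xte = vectorizer.transform(X_test)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, y_train)
    print(name, "test accuracy:", accuracy_score(y_test, clf.predict(Xte)))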
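And a sketch of parts (c)-(e) under the same file-layout assumptions; MLPClassifier stands in for the "neural network (two hidden layers)", and the hidden-layer sizes and random_state values are placeholder choices.

# Q1 (c)-(e): Amazon (logistic regression vs. two-hidden-layer MLP with confusion matrices),
# Yelp K-NN sweep over K = 1..5, and predictions for the two sample reviews.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

amazon = pd.read_csv("amazon_data.txt", sep="\t", names=["review", "label"])
yelp = pd.read_csv("yelp_data.txt", sep="\t", names=["review", "label"])

# (c) Amazon: logistic regression vs. a neural network with two hidden layers
Xa_train, Xa_test, ya_train, ya_test = train_test_split(
    amazon["review"], amazon["label"], test_size=0.30, random_state=42)
vec = TfidfVectorizer(stop_words="english")
Xa_tr, Xa_te = vec.fit_transform(Xa_train), vec.transform(Xa_test)
logreg = LogisticRegression(max_iter=1000).fit(Xa_tr, ya_train)
mlp = MLPClassifier(hidden_layer_sizes=(50, 25), max_iter=500, random_state=42).fit(Xa_tr, ya_train)
for name, model in [("Logistic Regression", logreg), ("Neural Network", mlp)]:
    pred = model.predict(Xa_te)
    print(name, "accuracy:", accuracy_score(ya_test, pred))
    print(confusion_matrix(ya_test, pred))

# (d) Yelp: K-NN for K = 1..5, recording the test accuracy in `scores`
Xy_train, Xy_test, yy_train, yy_test = train_test_split(
    yelp["review"], yelp["label"], test_size=0.30, random_state=42)
vec_y = TfidfVectorizer(stop_words="english")
Xy_tr, Xy_te = vec_y.fit_transform(Xy_train), vec_y.transform(Xy_test)
scores = []
for k in range(1, 6):
    knn = KNeighborsClassifier(n_neighbors=k).fit(Xy_tr, yy_train)
    scores.append(accuracy_score(yy_test, knn.predict(Xy_te)))
plt.plot(range(1, 6), scores, marker="o")
plt.xlabel("K"); plt.ylabel("Test accuracy"); plt.show()

# (e) Predictions for the two sample reviews with the Amazon logistic regression model
reviews = ["SUPERB, I AM IN LOVE IN THIS PHONE",
           "Do not purchase this product. My cell phone blast when I switched the charger"]
print(logreg.predict(vec.transform(reviews)))   # 1 = positive, 0 = negative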
Question 2 (60 pts)
The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. This data set is built into scikit-learn, so you don't need to download it explicitly. You can check the code here:
https://towardsdatascience.com/machine-learning-nlp-text-classification-using-scikit-learn-python-and-nltk-c52b92a7c73a
to load the data set directly in the notebook (this might take a few minutes, so be patient). For example:
from sklearn.datasets import fetch_20newsgroups
twenty_train = fetch_20newsgroups(subset='train', shuffle=True)
a) Using both CountVectorizer and TfidfVectorizer from the sklearn library separately, perform logistic regression classification on the training set, and show the confusion matrix and accuracy by predicting the class labels in the test set.
b) Perform a logistic regression classification and show the accuracy on the test set.
c) Perform K-means clustering on the training set with K = 20.
d) Plot the Elbow-method curve for different cluster sizes (5, 10, 15, 20, 25, 30) and determine the best cluster size. (Hedged code sketches for this question follow below.)
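A minimal sketch of parts (a) and (b), using the built-in train and test subsets of 20 Newsgroups; the vectorizer settings and max_iter value are placeholder choices.

# Q2 (a)-(b): vectorize 20 Newsgroups with CountVectorizer and TfidfVectorizer, train a
# logistic regression classifier on the training subset, and evaluate on the test subset.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix

twenty_train = fetch_20newsgroups(subset="train", shuffle=True)
twenty_test = fetch_20newsgroups(subset="test", shuffle=True)

for name, vectorizer in [("CountVectorizer", CountVectorizer(stop_words="english")),
                         ("TfidfVectorizer", TfidfVectorizer(stop_words="english"))]:
    Xtr = vectorizer.fit_transform(twenty_train.data)
    Xte = vectorizer.transform(twenty_test.data)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, twenty_train.target)
    pred = clf.predict(Xte)
    print(name, "test accuracy:", accuracy_score(twenty_test.target, pred))
    print(confusion_matrix(twenty_test.target, pred))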
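And a sketch of parts (c) and (d). The Elbow method is interpreted here as plotting K-means inertia (within-cluster sum of squares) against the cluster size, which is one common reading of the prompt; swap in a different score if the course intends otherwise.

# Q2 (c)-(d): K-means clustering of the 20 Newsgroups training set, plus an Elbow plot.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer

twenty_train = fetch_20newsgroups(subset="train", shuffle=True)
X = TfidfVectorizer(stop_words="english").fit_transform(twenty_train.data)

# (c) K-means with K = 20 (ideally one cluster per newsgroup)
km20 = KMeans(n_clusters=20, n_init=10, random_state=42).fit(X)

# (d) Elbow method: record the inertia for several cluster sizes and plot it
sizes = [5, 10, 15, 20, 25, 30]
inertias = []
for k in sizes:
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    inertias.append(km.inertia_)
plt.plot(sizes, inertias, marker="o")
plt.xlabel("Number of clusters K"); plt.ylabel("Inertia"); plt.show()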
Question 3 (60 pts)
The medical dataset "image_caption.txt" contains captions for 1,000 images (ImageID). Let's build a small search engine (you may explore these links for help: https://towardsdatascience.com/create-a-simple-search-engine-using-python-412587619ff5 and https://www.machinelearningplus.com/nlp/cosine-similarity/) by performing the following:
a) Read all the data files into a Python pandas DataFrame.
b) Perform the necessary pre-processing tasks (e.g., punctuation removal, number removal, stop-word removal).
c) Create a term-document matrix with TF-IDF weighting.
d) Calculate the similarity using cosine similarity and show the ten (10) top-ranked images based on the following query (a hedged code sketch follows below):
"CT images of chest showing ground glass opacity"
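A minimal sketch of parts (a)-(d), assuming "image_caption.txt" is tab-separated with an ImageID column followed by a caption column; the separator, column names, and pre-processing choices are assumptions to adjust against the real file.

# Q3 (a)-(d): tiny TF-IDF search engine over the image captions, ranked by cosine
# similarity to the query (assumed tab-separated file: ImageID <tab> caption).
import re
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# (a) Read the caption file into a DataFrame
captions = pd.read_csv("image_caption.txt", sep="\t", names=["ImageID", "caption"])

# (b) Basic pre-processing: lowercase, then drop punctuation and digits
def clean(text):
    return re.sub(r"[^a-z\s]", " ", text.lower())

captions["clean"] = captions["caption"].apply(clean)

# (c) Term-document matrix with TF-IDF weighting (stop words removed as well)
vectorizer = TfidfVectorizer(stop_words="english")
tdm = vectorizer.fit_transform(captions["clean"])

# (d) Cosine similarity between the query and every caption; show the top 10 images
query = "CT images of chest showing ground glass opacity"
query_vec = vectorizer.transform([clean(query)])
sims = cosine_similarity(query_vec, tdm).ravel()
top10 = sims.argsort()[::-1][:10]
print(captions.iloc[top10][["ImageID", "caption"]])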