Introduction to Machine Learning Week 1 NPTEL Assignment Answers

Are you searching for accurate and reliable NPTEL Week 1 assignment answers for the course “Introduction to Machine Learning”? You’ve landed in the right place! This article walks through each Week 1 question, giving the answer along with a short explanation, so you can complete the assignment confidently and correctly.

1. Which of the following is/are unsupervised learning problem(s)?

Options:
a) Sorting a set of news articles into four categories based on their titles
b) Forecasting the stock price of a given company based on historical data
c) Predicting the type of interaction (positive/negative) between a new drug and a set of human proteins
d) Identifying close-knit communities of people in a social network
e) Learning to generate artificial human faces using the faces from a facial recognition dataset

Answer: d, e

Explanation:
Unsupervised learning works on unlabeled data to identify patterns or structures.

  • (d) Identifying close-knit communities is a clustering task on unlabeled data.
  • (e) Generating faces is a generative modeling task (as with GANs), where the model learns the data distribution without labels.

Options (a), (b), and (c) are supervised tasks that require labeled outcomes.

2. Which of the following statement(s) about Reinforcement Learning (RL) is/are true?

Options:
a) While learning a policy, the goal is to maximize the reward for the current time step
b) During training, the agent is explicitly provided the most optimal action to be taken in each state
c) The actions taken by an agent do not affect the environment in any way
d) RL agents used for playing turn-based games like chess can be trained by playing the agent against itself (self-play)
e) RL can be used in an autonomous driving system

Answer: d, e

Explanation:

  • (d) Self-play is a well-established RL training method (used, for example, in AlphaZero).
  • (e) RL is widely studied for autonomous driving, where an agent learns a driving policy by interacting with its environment.

Options (a), (b), and (c) are false: RL maximizes long-term cumulative reward rather than the reward at the current time step, the agent is not told the optimal action during training, and the agent’s actions do affect the environment.

3. Which of the following is/are regression task(s)?

Options:
a) Predicting whether an email is spam or not spam
b) Predicting the number of new COVID cases in a given time period
c) Predicting the total number of goals a given football team scores in a year
d) Identifying the language used in a given text document

Answer: b, c

Explanation:
Regression tasks predict continuous numerical values.

  • (b) and (c) call for numeric predictions (case counts, goal totals).

Options (a) and (d) are classification tasks with categorical outcomes (spam/ham, language label).

If you're looking for the complete and verified answers for Week 1 of NPTEL's Introduction to Machine Learning 2025, you can refer to the detailed solution set provided by Answer GPT with full explanations and quiz support.


4. Which of the following is/are classification task(s)?

Options:
a) Predicting whether or not a customer will repay a loan based on their credit history
b) Forecasting the weather (temperature, humidity, rainfall etc.) at a given place for the following 24 hours
c) Predict the price of a house 10 years after it is constructed
d) Predict if a house will be standing 50 years after it is constructed

Answer: a, d

Explanation:
Classification involves predicting discrete categories.

  • (a) Yes/No: will the customer repay the loan?
  • (d) Yes/No: will the house still be standing? (a binary outcome)

Options (b) and (c) involve numeric values, so they are regression tasks.

5. Fit a linear regression model using MSE. Predict y at (x1, x2) = (0.5, -1.0)

Options:
a) 4.05
b) 2.05
c) -1.95
d) -3.95

Answer: a) 4.05

Explanation:
Linear regression predicts using the equation:
y = β₀ + β₁·x₁ + β₂·x₂
With the learned coefficients, substituting (x₁, x₂) = (0.5, −1.0) into this equation yields y = 4.05; the coefficients themselves are found by minimizing the mean squared error on the training data.
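The substitution step can be sketched in Python. The coefficients below are hypothetical placeholders, chosen only so the arithmetic lands on 4.05; the assignment’s actual fitted values depend on its training data, which is not reproduced here:

```python
def predict(beta, x):
    """Linear regression prediction: y = beta[0] + beta[1]*x1 + beta[2]*x2 + ..."""
    intercept, *weights = beta
    return intercept + sum(w * xi for w, xi in zip(weights, x))

# Hypothetical coefficients (NOT the assignment's fitted values).
beta = (3.0, 5.0, 1.45)
y = predict(beta, (0.5, -1.0))  # 3.0 + 5.0*0.5 + 1.45*(-1.0) = 4.05
```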


6. Using k-NN regression (k = 3), predict y at (x1, x2) = (1.0, 0.5)

Options:
a) -1.766
b) -1.166
c) 1.133
d) 1.733

Answer: c) 1.133

Explanation:
In k-NN regression, the prediction is the average of the y-values of the k = 3 nearest neighbors.
Using Euclidean distance to rank the training points, the three closest are selected, and their average y-value works out to 1.133.
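The procedure can be sketched in plain Python. The training points below are hypothetical (the assignment’s data table is not reproduced here) and were picked so that the three nearest neighbors average to about 1.133:

```python
import math

def knn_regress(train, query, k=3):
    """Predict y as the mean y of the k nearest training points (Euclidean distance)."""
    ranked = sorted(train, key=lambda point: math.dist(point[0], query))
    return sum(y for _, y in ranked[:k]) / k

# Hypothetical (x1, x2) -> y training pairs, NOT the assignment's table.
train = [((0.9, 0.4), 1.2), ((1.1, 0.6), 1.0),
         ((1.0, 0.3), 1.2), ((3.0, 3.0), -2.0)]
prediction = knn_regress(train, (1.0, 0.5))  # mean of 1.2, 1.0, 1.2 ≈ 1.133
```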


7. Using k-NN classification (k = 5), predict class label at (x1, x2) = (1.0, 1.0)

Options:
a) 0
b) 1
c) 2
d) Cannot be predicted

Answer: b) 1

Explanation:
For k-NN classification, the output is the majority class label among the k nearest neighbors.
At (1.0, 1.0), the most frequent class among the 5 closest training points is 1.
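A minimal majority-vote sketch, again using hypothetical labeled points rather than the assignment’s actual table:

```python
import math
from collections import Counter

def knn_classify(train, query, k=5):
    """Return the majority class label among the k nearest training points."""
    ranked = sorted(train, key=lambda point: math.dist(point[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical labeled points, NOT the assignment's table.
train = [((1.0, 1.1), 1), ((1.1, 0.9), 1), ((0.8, 1.2), 1),
         ((0.5, 0.5), 0), ((1.5, 1.5), 2), ((3.0, 0.0), 0)]
label = knn_classify(train, (1.0, 1.0))  # labels of the 5 nearest: 1, 1, 1, 0, 2
```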


8. Which of the following statements are true regarding linear regression and k-NN regression models?

Options:
a) A linear regressor requires the training data points during inference
b) A k-NN regressor requires the training data points during inference
c) A k-NN regressor with a higher value of k is less prone to overfitting
d) A linear regressor partitions the input space into multiple regions such that the prediction over a given region is constant

Answer: b, c

Explanation:

  • (b) k-NN uses stored training data at prediction time.
  • (c) Higher k smoothens predictions, reducing overfitting.
  • (a) is false; linear models use learned weights, not training data.
  • (d) is false for a linear regressor, whose prediction varies linearly across the input space; piecewise-constant predictions over regions describe models such as k-NN or decision trees.
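The contrast in statements (a) and (b) can be made concrete. In the sketch below (all coefficients and data are hypothetical), the linear model needs only its fixed-size weight vector at inference time, while the k-NN model must keep and search every training point:

```python
import math

# Linear model: after fitting, only the weights survive; training data can be discarded.
beta = (3.0, 5.0, 1.45)  # hypothetical learned (intercept, w1, w2)
def linear_predict(x):
    return beta[0] + sum(w * xi for w, xi in zip(beta[1:], x))

# k-NN model: "training" just stores the data; inference searches all of it.
train = [((0.9, 0.4), 1.2), ((1.1, 0.6), 1.0), ((1.0, 0.3), 1.2)]
def knn_predict(x, k=3):
    ranked = sorted(train, key=lambda point: math.dist(point[0], x))
    return sum(y for _, y in ranked[:k]) / k
```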

9. (Question not provided)

Answer: b, c, e

Explanation:
The question text is not available in this copy of the assignment; only the recorded answer (b, c, e) has been preserved.


10. Consider the following statements about two models (i) and (ii)

Options:
a) On a given training dataset, the mean-squared error of (i) is always less than or equal to that of (ii).
b) (i) is likely to have a higher variance than (ii).
c) (ii) is likely to have a higher variance than (i).
d) If (i) overfits the data, then (ii) will definitely overfit.
e) If (ii) underfits the data, then (i) will definitely underfit.

Answer: c, d, e

Explanation:

  • (c) Model (ii) is the more flexible and complex of the two, so it is likely to have higher variance.
  • (d) If the simpler model (i) overfits the data, the more complex model (ii) will certainly overfit as well.
  • (e) If the more complex model (ii) underfits the data, the simpler model (i) will certainly underfit too.


📌 Conclusion

For students who want full, expert-curated solutions to the Week 1 assignment of Introduction to Machine Learning (NPTEL 2025), visit the official page:
👉 Introduction to Machine Learning Week 1 NPTEL Assignment Answers by Answer GPT.