Questions tagged [linear-regression]

For questions related to the theory or application of linear regression.

Linear regression is an approach for finding an optimal set of parameter values $P$ for a parametric model $\mathbb{M}_P$ that is linear in those parameters, given a data set of associated scalar dependent values $y$ and scalar independent values $x$ or independent vector values $\vec{x}$.
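
As a concrete illustration of the tag description above, here is a minimal sketch of fitting such a model by ordinary least squares with NumPy. The toy data and variable names (`X`, `y`, `beta`) are assumptions for illustration only.

```python
import numpy as np

# Toy data: 100 samples with 2-dimensional independent vectors and scalar dependent values
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 + rng.normal(scale=0.1, size=100)

# Prepend a column of ones so the intercept is part of the parameter set P
X_design = np.column_stack([np.ones(len(X)), X])

# Ordinary least squares: the parameters that minimize the sum of squared residuals
beta, residuals, rank, sv = np.linalg.lstsq(X_design, y, rcond=None)
print(beta)  # approximately [0.5, 3.0, -2.0]
```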

61 questions
8 votes · 5 answers

Linear regression: why is distance *squared* used as an error metric?

Usually, when performing linear regression predictions and gradient descent, the error for a particular line is measured by the sum of the squared distances. Why distance squared? In most of the explanations I…
Alpha · 458
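
The question above asks why the squared distance is used. As a reference point, here is a minimal sketch of the sum-of-squared-distances error it refers to, evaluated for a candidate line $y = mx + b$; the helper name and toy data are assumptions for illustration, not taken from the question.

```python
import numpy as np

def sum_squared_error(m, b, x, y):
    """Sum of squared vertical distances between the line y = m*x + b and the data."""
    residuals = y - (m * x + b)
    return np.sum(residuals ** 2)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])
print(sum_squared_error(2.0, 1.0, x, y))  # 0.0: the line passes through every point
print(sum_squared_error(2.0, 0.0, x, y))  # 4.0: every point is off by 1
```
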
8 votes · 1 answer

Is there a connection between the bias term in a linear regression model and the bias that can lead to under-fitting?

Here is a linear regression model $$y = mx + b,$$ where $b$ is known as the $y$-intercept, but also as the bias [1], $m$ is the slope, and $x$ is the feature vector. As I understand it, in machine learning there is also the bias that can cause the…
6 votes · 2 answers

Is there a machine learning algorithm to find similar sales patterns?

I have a dataset as follows (the table extends to include an extra 146 columns for companies 4-149). Is there an algorithm I could use effectively to find similar patterns in sales from the other companies when compared to my company? I thought…
5 votes · 1 answer

Can we use recursive least squares as a learning algorithm for an ADALINE?

I'm new to neural networks; I study electrical engineering and have just started working with ADALINEs. I use Matlab, and in their documentation they write: However, here the LMS (least mean squares) learning rule, which is much more powerful than…
Carter Nolan · 151
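
For reference, a minimal sketch of the recursive least squares (RLS) update the question above asks about, applied to an ADALINE-style linear unit. The variable names, forgetting factor value, and toy data are assumptions for illustration, not taken from the Matlab documentation the question cites.

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One recursive least squares step for a linear unit y = w @ x.

    w   : current weight vector
    P   : current inverse-correlation matrix estimate
    x   : input vector for this sample
    d   : desired (target) output for this sample
    lam : forgetting factor, 0 < lam <= 1
    """
    Px = P @ x
    k = Px / (lam + x @ Px)              # gain vector
    e = d - w @ x                        # a priori error
    w_new = w + k * e                    # weight update
    P_new = (P - np.outer(k, Px)) / lam  # update of the inverse-correlation estimate
    return w_new, P_new

# Usage: start from zero weights and a large initial P, then feed samples one by one
n = 3
w = np.zeros(n)
P = np.eye(n) * 1e3
rng = np.random.default_rng(1)
true_w = np.array([1.0, -2.0, 0.5])
for _ in range(200):
    x = rng.normal(size=n)
    d = true_w @ x
    w, P = rls_update(w, P, x, d)
print(w)  # close to [1.0, -2.0, 0.5]
```
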
4 votes · 1 answer

Regression on extreme values

I have a data set that looks like this: I would like to estimate a relationship between x-values and the corresponding 5% extreme y-values, something that might look like this: Do you have an idea of an algorithm that might help me with this? I…
4 votes · 3 answers

What to do if a CNN cannot overfit a training set after adding dropout?

I have been trying to use a CNN for a regression problem. I followed the standard recommendation of disabling dropout and overfitting a small training set prior to trying for generalization. With a 10-layer-deep architecture, I could overfit a…
3 votes · 2 answers

What makes a machine learning algorithm a low variance one or a high variance one?

Some examples of low-variance machine learning algorithms include linear regression, linear discriminant analysis, and logistic regression. Examples of high-variance machine learning algorithms include decision trees, k-nearest neighbors, and…
3 votes · 2 answers

Matrix Dimension for Linear regression coefficients

While reading about a least squares implementation for machine learning, I came across this passage in the following two photos: Perhaps I'm misinterpreting the meaning of $\beta$, but if $X^T$ has dimension $1 \times p$ and $\beta$ has dimension $p \times K$, then $\hat{Y}$…
Hanzy · 499
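
To make the dimension bookkeeping in the question above concrete, here is a minimal NumPy sketch; the sizes p and K are placeholder values, not taken from the passage in the photos.

```python
import numpy as np

p, K = 4, 3                 # p features, K outputs
x = np.ones((1, p))         # a single input written as a 1 x p row vector (x^T)
beta = np.ones((p, K))      # coefficient matrix, p x K

y_hat = x @ beta            # (1 x p) @ (p x K) -> 1 x K
print(y_hat.shape)          # (1, 3): one prediction per output column
```
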
3 votes · 3 answers

Is Deep Learning the repeated application of Linear Regression?

Is Deep Learning the repeated application of Linear Regression?
3 votes · 1 answer

With gradient descent and MSE on a regression, must/should every epoch use the exact same training samples?

Let's say I've got a training sample set of 1 million records, from which I pull batches of 100 to train a basic regression model using gradient descent and MSE as the loss function. Assume test and cross-validation samples have already been…
Ray · 131
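
A minimal sketch of one common reading of the setup in the question above: every epoch revisits the same training samples, just reshuffled, and pulls them in batches of 100. The array names and the small placeholder data are assumptions for illustration.

```python
import numpy as np

def epochs(X, y, batch_size=100, n_epochs=3, seed=0):
    """Yield mini-batches; every epoch visits the same samples, in a new random order."""
    rng = np.random.default_rng(seed)
    n = len(X)
    for _ in range(n_epochs):
        order = rng.permutation(n)              # reshuffle once per epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            yield X[idx], y[idx]

# Usage with small placeholder data standing in for the 1 million records
X = np.arange(1000, dtype=float).reshape(500, 2)
y = X.sum(axis=1)
for xb, yb in epochs(X, y):
    pass  # one gradient descent / MSE update per batch would go here
```
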
2 votes · 1 answer

Does the correlation between inputs affect the model performance?

I'm currently working on a regression problem and I have 10 inputs/attributes. What should I do if there are correlations between different features of the input data? Does the correlation between inputs affect the performance (e.g. accuracy) of…
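
One simple way to inspect the situation described above is the feature correlation matrix; here is a minimal sketch with placeholder data standing in for the 10 inputs/attributes.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                            # placeholder for the 10 inputs
X[:, 1] = 0.9 * X[:, 0] + 0.1 * rng.normal(size=200)      # make two features strongly correlated

corr = np.corrcoef(X, rowvar=False)   # 10 x 10 matrix of pairwise feature correlations
print(np.round(corr[0, 1], 2))        # close to 1.0: features 0 and 1 are nearly redundant
```
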
2 votes · 1 answer

Understanding the math behind using maximum likelihood for linear regression

I understand both terms, linear regression and maximum likelihood, but when it comes to the math I am totally lost. So I am reading the article The Principle of Maximum Likelihood (by Suriyadeepan Ramamoorthy). It is really well written, but, as…
xava · 423
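
For reference, the standard textbook link between the two terms in the question above, written under the usual Gaussian-noise assumption (this is not a summary of the cited article):

$$y_i = \theta^{\top} x_i + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2),$$

$$\log L(\theta) = -\frac{n}{2}\log\!\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - \theta^{\top} x_i\right)^2,$$

so maximizing the likelihood over $\theta$ is the same as minimizing the sum of squared errors $\sum_{i}\left(y_i - \theta^{\top} x_i\right)^2$, i.e. ordinary least squares.
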
2 votes · 2 answers

How is the direction of weight change determined by the gradient descent algorithm?

The result of the gradient descent algorithm is a vector, so how does the algorithm decide the direction of the weight change? We give hyperparameters for the step size. But how is the vector direction for the weight change determined, for the purpose of reducing the loss…
Amey · 123
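
A minimal sketch of the usual answer to the question above: gradient descent moves the weights along the negative gradient of the loss, and the step-size hyperparameter only scales that vector. The one-feature toy data below is an assumption for illustration.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
w = 0.0                      # single weight for the model y ≈ w * x
lr = 0.05                    # step size (learning rate) hyperparameter

for _ in range(100):
    y_hat = w * x
    grad = np.mean(2 * (y_hat - y) * x)   # dMSE/dw: the data determine this direction
    w -= lr * grad                        # move against the gradient to reduce the loss
print(w)  # close to 2.0
```
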
2 votes · 3 answers

Understanding a few terms in Andrew Ng's definition of the cost function for linear regression

I have completed week 1 of Andrew Ng's course. I understand that the cost function for linear regression is defined as $J(\theta_0, \theta_1) = \frac{1}{2m}\sum_{i=1}^{m}\left(h(x^{(i)}) - y^{(i)}\right)^2$ and that $h$ is defined as $h(x) = \theta_0 + \theta_1 x$. But I don't understand…
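
To make the symbols in the excerpt above concrete, here is a minimal sketch that evaluates that cost function on placeholder data (the data values are not from the course):

```python
import numpy as np

def J(theta0, theta1, x, y):
    """Cost for linear regression: (1 / (2m)) * sum((h(x_i) - y_i)^2)."""
    m = len(x)
    h = theta0 + theta1 * x           # hypothesis h(x) = theta0 + theta1 * x
    return np.sum((h - y) ** 2) / (2 * m)

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
print(J(0.0, 1.0, x, y))  # 0.0: the hypothesis fits the data exactly
print(J(0.0, 0.5, x, y))  # (0.25 + 1 + 2.25) / 6 ≈ 0.583
```
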
2 votes · 1 answer

How to choose evaluation functions for features, when network effects are in place (multi-agent systems)?

So, I have this huge amount of data, which has 7 vector features (floats from 0 to 1). I am trying to build a kind of recommendation system, with a twist (it uses agents, negotiations, and narratives; narratives meaning that there will be temporal…
Ahti Ahde · 278