Implementing gradient descent in Python (machine learning)

Asked: 2013-11-10 12:51:58

Tags: python machine-learning gradient-descent

I am trying to implement gradient descent in Python, but the cost J keeps increasing regardless of the lambda and alpha values, and I cannot figure out what the problem is. It would be great if someone could help me with this. The inputs are matrices Y and R of the same dimensions: Y is a movies x users matrix of ratings, and R simply indicates whether a user has rated a movie.
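
For context, the quantity J printed by costFunc below is the regularized collaborative-filtering cost

    J = (1/2) * sum over (i,j) with R(i,j)=1 of (theta_j . x_i - Y(i,j))^2
        + (lambda/2) * sum(x^2) + (lambda/2) * sum(theta^2)

where theta_j and x_i are the parameter vectors for user j and movie i.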

#Recommender system ML
import numpy
import scipy.io

def gradientDescent(y,r):
        (nm,nu) = numpy.shape(y)          
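        # x: movie feature matrix (nm x 10), theta: user parameter matrix (nu x 10)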
        x =  numpy.mat(numpy.random.randn(nm,10))
        theta =  numpy.mat(numpy.random.randn(nu,10))
        for i in range(1,10):
                (x,theta) = costFunc(x,theta,y,r)


def costFunc(x,theta,y,r):

        X_tmp = numpy.power(x , 2)
        Theta_tmp = numpy.power(theta , 2)
        lmbda = 0.1
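        # L2 regularization term over both x and theta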
        reg = ((lmbda/2) * numpy.sum(Theta_tmp))+ ((lmbda/2)*numpy.sum(X_tmp))
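        # squared prediction error, masked by r so only rated entries contribute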
        ans = numpy.multiply(numpy.power(((theta * x.T).T - y),2) , r)
        res = (0.5 * numpy.sum(ans))+reg
        print "J:",res
        print "reg:",reg
        (nm,nu) = numpy.shape(y)          
        X_grad = numpy.mat(numpy.zeros((nm,10)));
        Theta_grad = numpy.mat(numpy.zeros((nu,10)));
        alpha = 0.1
#       [m f] = size(X);
        (m,f) = numpy.shape(x);
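        # accumulate the update step for each entry of x (note: stores -alpha * gradient, not the raw gradient)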

        for i in range(0,m):                
                for k in range(0,f):
                        tmp = 0
#                       X_grad(i,k) += (((theta * x'(:,i)) - y(i,:)').*r(i,:)')' * theta(:,k);
                        tmp += ((numpy.multiply(((theta * x.T[:,i]) - y[i,:].T),r[i,:].T)).T) * theta[:,k];
                        tmp += (lmbda*x[i,k]);
                        X_grad[i,k] -= (alpha*tmp)

#                       X_grad(i,k) += (lambda*X(i,k));


#       [m f] = size(Theta); 
        (m,f) = numpy.shape(theta);
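        # accumulate the update step for each entry of theta (again -alpha * gradient)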


        for i in range(0,m):                
                for k in range(0,f):
                        tmp = 0
#                       Theta_grad(i,k) += (((theta(i,:) * x') - y(:,i)').*r(:,i)') * x(:,k);
                        tmp += (numpy.multiply(((theta[i,:] * x.T) - y[:,i].T),r[:,i].T)) * x[:,k];
                        tmp += (lmbda*theta[i,k]);
                        Theta_grad[i,k] -= (alpha*tmp)

#                        Theta_grad(i,k) += (lambda*Theta(i,k));

        return(X_grad,Theta_grad)

def main():
        mat1 = scipy.io.loadmat(r"C:\Users\ROHIT\Machine Learning\Coursera\mlclass-ex8\ex8_movies.mat")   
        Y = mat1['Y']
        R = mat1['R']   
        r = numpy.mat(R)
        y = numpy.mat(Y)   
        gradientDescent(y,r)

#if __name__ == '__main__':
main()

1 Answer:

Answer 0 (score: 0):

I did not check the entire code logic, but assuming it is correct: your costFunc should return the gradient of the cost function, and in these lines:

for i in range(1,10):
     (x,theta) = costFunc(x,theta,y,r)

you overwrite the last values of x and theta with their gradients. A gradient is a direction of change, so you should move in the opposite direction (subtract the gradient rather than overwrite the values):

for i in range(1,10):
     (x_grad, theta_grad) = costFunc(x,theta,y,r)
     x -= x_grad
     theta -= theta_grad

However, it seems you have already folded the minus sign (and the alpha factor) into the values returned by costFunc, so you should add them instead:

for i in range(1,10):
     (x_step, theta_step) = costFunc(x,theta,y,r)
     x += x_step
     theta += theta_step
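
For reference, here is a minimal self-contained sketch of the same objective minimized with a vectorized gradient step (the stand-in random data, the sizes nm/nu/nf, and the learning rate lr are illustrative assumptions, not values from the question):

import numpy

numpy.random.seed(0)
nm, nu, nf = 20, 15, 10                               # movies, users, latent features
y = numpy.mat(numpy.random.rand(nm, nu))              # stand-in ratings
r = numpy.mat((numpy.random.rand(nm, nu) > 0.5) * 1.0)  # stand-in "rated" mask
lmbda, lr = 0.1, 0.002

x = numpy.mat(numpy.random.randn(nm, nf))
theta = numpy.mat(numpy.random.randn(nu, nf))

for it in range(100):
    err = numpy.multiply((x * theta.T) - y, r)        # masked prediction error
    j = 0.5 * numpy.sum(numpy.power(err, 2)) \
        + (lmbda / 2) * (numpy.sum(numpy.power(x, 2)) + numpy.sum(numpy.power(theta, 2)))
    x_grad = err * theta + lmbda * x                  # dJ/dx
    theta_grad = err.T * x + lmbda * theta            # dJ/dtheta
    x = x - lr * x_grad                               # step *against* the gradient
    theta = theta - lr * theta_grad
    print "J:", j

With a small enough learning rate the printed J decreases on every iteration, which is the behaviour the question is after.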