Gradient Descent
Gradient Descent is a key optimization technique used to minimize a function, often the error or loss of a model. Beginning with an initial guess for the model parameters, the algorithm calculates the gradient (derivative) of the loss at that point and takes a step in the opposite direction, with the step size controlled by a learning rate. Over many iterations, the parameters converge toward a (local) minimum of the loss, refining the model's accuracy. Commonly used in training neural networks, it is akin to finding the lowest point of a valley by taking small downhill steps.
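To make the update rule concrete, here is a minimal sketch in Python of gradient descent on a simple one-dimensional quadratic loss. The loss function, the learning rate of 0.1, and the iteration count are illustrative assumptions, not details from the text above; the same update rule applies to any differentiable loss.

```python
# A minimal sketch of gradient descent, assuming the quadratic loss
# f(x) = (x - 3)^2, whose gradient is f'(x) = 2 * (x - 3).
# The learning rate and step count here are illustrative choices.

def loss(x):
    return (x - 3) ** 2

def gradient(x):
    return 2 * (x - 3)

x = 0.0              # initial guess for the parameter
learning_rate = 0.1  # controls the step size
for step in range(50):
    x = x - learning_rate * gradient(x)  # step opposite the gradient

print(f"x = {x:.4f}, loss = {loss(x):.6f}")  # x approaches the minimum at 3
```

Each iteration applies the update x_new = x - learning_rate * gradient(x); because the gradient points uphill, subtracting it moves the parameter downhill, and after 50 steps x sits very close to the minimizer at 3.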