AI Engineering Glossary

Distillation

Knowledge Distillation is a technique for creating a smaller, more efficient model from a larger, more complex one. The large model, called the teacher, trains the smaller student model by providing its output probabilities (soft labels) as training targets, often alongside the original ground-truth labels. The goal is to retain much of the teacher's accuracy while reducing computational cost, enabling deployment on less powerful hardware. It is a form of model compression and can be contrasted with techniques such as pruning, which shrink a network by removing parts of it.
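As an illustration, the sketch below shows the classic soft-target distillation loss, assuming PyTorch; the function name, the temperature of 4.0, and the alpha weighting are illustrative choices rather than fixed parts of the technique.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Minimal sketch of a soft-target distillation loss.

    Blends the KL divergence between temperature-softened teacher and
    student distributions with ordinary cross-entropy on hard labels.
    """
    # Softened distributions; a temperature above 1 exposes the teacher's
    # relative preferences among the non-target classes.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)

    # KL term, scaled by T^2 so its gradient magnitude stays comparable
    # to the hard-label term as the temperature changes.
    kd_loss = F.kl_div(soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    # Standard supervised loss on the ground-truth labels.
    ce_loss = F.cross_entropy(student_logits, labels)

    return alpha * kd_loss + (1 - alpha) * ce_loss
```

In practice the teacher's logits are computed with gradients disabled, and only the student's parameters are updated with this combined loss.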

