AI Engineering Glossary

Inference

Inference is the process of using a trained model to make predictions or decisions on new data. It is distinct from training, in which the model learns its parameters from a dataset. Inference is what makes models useful in real-world applications, allowing systems to act on new inputs, such as diagnosing diseases from medical images or predicting stock prices. It is closely linked to concepts like model deployment and real-time data processing.
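The training/inference distinction can be made concrete with a minimal sketch. The example below uses a hand-rolled one-feature linear model (illustrative only, not a real library API): `train` learns parameters from a dataset, and `infer` applies those frozen parameters to new input.

```python
# Minimal sketch: training learns parameters, inference applies them.
# The model here is a simple linear fit y = slope * x + intercept.

def train(xs, ys):
    # Training: learn (slope, intercept) from a dataset via the
    # closed-form least-squares solution for a single feature.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

def infer(model, x):
    # Inference: apply the learned parameters to new, unseen input.
    # No learning happens here; the parameters are fixed.
    slope, intercept = model
    return slope * x + intercept

model = train([1, 2, 3, 4], [2, 4, 6, 8])   # learns y = 2x
print(infer(model, 10))                      # prediction for new data: 20.0
```

In deployment, training typically happens once (offline, on powerful hardware), while inference like `infer` above runs repeatedly in production, often under latency constraints.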

