AI Engineering Glossary

Guardrails

Guardrails, in the context of artificial intelligence, are guidelines or constraints put in place to ensure that models behave ethically, safely, and as intended. They help prevent undesired or harmful outputs by limiting what a model can generate or how it operates. For instance, a guardrail might block a language model from producing hate speech or misleading information, keeping its responses safe and trustworthy.
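
As a concrete illustration, the sketch below shows one simple way an output guardrail can be wired around a model call: the model's response is checked against a blocklist before it is returned, and intercepted if it violates the constraint. This is a minimal sketch, not a definitive implementation; the generate() function, the BLOCKED_TERMS set, and the refusal message are all hypothetical placeholders, and production systems typically use trained classifiers or moderation APIs rather than keyword matching.

```python
# Minimal sketch of an output guardrail (illustrative only).
# generate() is a hypothetical stand-in for a real LLM call,
# and BLOCKED_TERMS holds placeholder terms, not a real policy.

BLOCKED_TERMS = {"blocked_phrase_1", "blocked_phrase_2"}


def generate(prompt: str) -> str:
    # Stand-in for a real model call; replace with your LLM client.
    return "model output for: " + prompt


def guarded_generate(prompt: str) -> str:
    """Run the model, then check its output against the guardrail."""
    output = generate(prompt)
    lowered = output.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        # The guardrail intercepts the response instead of returning it.
        return "Sorry, I can't help with that request."
    return output


if __name__ == "__main__":
    print(guarded_generate("Summarize today's news."))
```

In practice the same pattern can also be applied on the input side (screening prompts before they reach the model), and the simple blocklist check can be swapped for any validation step that fits the policy being enforced.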
