
Language Models

A language model is a deep learning neural network trained on a huge amount of text data to perform different types of language tasks. These models are commonly referred to as Large Language Models (LLMs). Language models come in many architectures, sizes and specializations.
A distinctive feature of the Cheshire Cat is that it is model-agnostic: it supports many different language models.

By default, the Cheshire Cat relies on two classes of language models, each tackling a different task.

Completion Model

This is the best-known type of language model (see, for example, ChatGPT, Cohere and many others). A completion model takes a string as input and generates a plausible answer by completing it.

Warning

An LLM's answer should not be accepted as-is, since LLMs are subject to hallucinations: their main goal is to generate answers that are plausible from a syntactic point of view. As a consequence, the provided answer could be based on completely invented information.
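As a minimal sketch of the idea, the snippet below sends a prompt to a completion model and reads back the generated text. The endpoint URL, model name and OpenAI-compatible response layout are assumptions for illustration only (here a local server such as Ollama is imagined); the Cheshire Cat configures the actual model for you through its settings.

```python
import requests

# Hypothetical local, OpenAI-compatible completion endpoint (assumption for this example).
API_URL = "http://localhost:11434/v1/chat/completions"
MODEL_NAME = "my-llm"  # placeholder model name


def complete(prompt: str) -> str:
    """Send a prompt string to a completion model and return its textual answer."""
    response = requests.post(
        API_URL,
        json={
            "model": MODEL_NAME,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    # OpenAI-compatible servers return the generated text in choices[0].message.content
    return response.json()["choices"][0]["message"]["content"]


print(complete("The Cheshire Cat is"))
```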

Embedding Model

This type of model takes a string as input and returns a vector as output. This vector is known as an embedding: a condensed representation of the input content that captures its semantic information.

Despite not being human-readable, an embedding has the advantage of living in a Euclidean geometric space: it can be seen as a point in a multidimensional space, so geometric operations can be applied to it. For instance, measuring the distance between two points tells us how similar two sentences are.
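To make the geometric intuition concrete, here is a small sketch that compares toy embedding vectors with cosine similarity. The three-dimensional vectors are invented for illustration; real embedding models output hundreds or thousands of dimensions, but the geometry works the same way.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: close to 1.0 means similar meaning, close to 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# Toy embeddings (made up for the example).
cat_sentence = np.array([0.9, 0.1, 0.3])      # "The cat sleeps on the sofa"
feline_sentence = np.array([0.8, 0.2, 0.35])  # "A feline naps on the couch"
weather_sentence = np.array([0.1, 0.9, 0.5])  # "It will rain tomorrow"

print(cosine_similarity(cat_sentence, feline_sentence))   # high -> similar meaning
print(cosine_similarity(cat_sentence, weather_sentence))  # lower -> less related
```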

Language Models flow

Developer documentation

Language Models hooks

Nodes marked with the 🪝 indicate the points in the execution pipeline where a hook is available for customization.
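As a hedged sketch of how such a hook can be used from a plugin, the example below customizes the prefix of the prompt sent to the completion model. The `agent_prompt_prefix` hook name and its signature are assumed here; refer to the developer documentation linked above for the authoritative list of available hooks.

```python
from cat.mad_hatter.decorators import hook


# Assumed hook name: agent_prompt_prefix (check the developer documentation).
# It lets a plugin replace the instructions placed at the top of the prompt
# before it is sent to the completion model.
@hook
def agent_prompt_prefix(prefix, cat):
    prefix = (
        "You are the Cheshire Cat: answer concisely, "
        "and admit it when you do not know something."
    )
    return prefix
```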