Generative AI: Artificial Intelligence - Large Language Models

A guide to generative AI (artificial intelligence) and its impact on higher education.

Glossary of Terms

Generative Pre-trained Transformer (GPT)

GPT stands for "Generative Pre-trained Transformer." It is a family of large-scale language generation models developed by OpenAI; GPT-3 and its successors, such as GPT-4, are recent versions.

GPT is trained on a massive dataset of text and can generate human-like text: it can answer questions, write essays, compose poetry, and even write code. It uses the context of the input to generate text that is semantically and grammatically correct.

GPT uses a transformer architecture, a type of neural network particularly well-suited to language processing tasks. The transformer architecture allows GPT to handle input sequences of variable length and to take into account the context of the text, which is important for generating coherent and meaningful text.

In simple terms, GPT is a powerful AI model that can generate human-like text. It is trained on a very large dataset of text and uses a transformer architecture, which lets it take the context of the text into account and therefore produce more coherent and meaningful output.
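As a rough illustration, the sketch below uses the open-source Hugging Face transformers library to generate text with GPT-2, a smaller, freely available member of the GPT family; the prompt and generation settings are arbitrary choices for the example, not part of any particular course or product.

```python
# A minimal sketch of GPT-style text generation, assuming the Hugging Face
# "transformers" library (and a backend such as PyTorch) is installed.
# GPT-2 stands in here for larger GPT models, which are typically accessed
# through an online service rather than run locally.
from transformers import pipeline

# Load a small, publicly available GPT model for text generation.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt, predicting one token at a time while
# attending to the context that came before.
prompt = "Generative AI is changing higher education because"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```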

Transfer Learning

Transfer learning is a technique in machine learning that allows a model trained on one task to be used as the starting point for a model on a different but related task. The idea is that the knowledge gained from solving one problem can be used to improve the ability to solve another related problem.

For example, a model trained to recognize images of animals could be used as the starting point for a model that is trained to recognize images of a specific type of animal, such as dogs. The knowledge learned about image recognition from the first model could be transferred to the second model, allowing it to learn to recognize dogs more quickly and accurately than if it had been trained from scratch.

Transfer learning is particularly useful in situations where there is a shortage of training data for a particular task. By starting with a pre-trained model, the model can still perform well on the target task without needing as much training data.

In simple terms, transfer learning is the process of using a pre-trained model as a starting point to train a new model on a different task. By doing so, the new model benefits from the knowledge learned by the original model and can perform better with less training data.
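The sketch below illustrates the dog-recognition example above using PyTorch and torchvision (an assumption; any deep learning framework would work): a network pre-trained on general images is frozen, and only a new final layer is trained for the narrower dog-classification task. The number of dog classes is an arbitrary example value.

```python
# A minimal transfer-learning sketch, assuming PyTorch and torchvision are
# installed. The number of dog classes (10) is a hypothetical example value.
import torch
import torch.nn as nn
from torchvision import models

# Start from a model pre-trained on a large, general image dataset (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers so the general image-recognition knowledge
# learned on the original task is kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer with a new one sized for the
# related target task, e.g. recognizing 10 dog breeds.
num_dog_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_dog_classes)

# Only the new layer's parameters are trained, so far less data is needed
# than training the whole network from scratch.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```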

Transformer (with respect to AI)

A transformer is a type of neural network architecture primarily used for natural language processing tasks, such as language translation, language generation, and text summarization. The transformer architecture was introduced in a 2017 paper by Google researchers, "Attention Is All You Need."

The defining feature of the transformer architecture is its use of self-attention mechanisms, which allow the model to weigh the importance of different parts of the input when making predictions. This lets the model take the context of the input into account, which is important for understanding the meaning of the text.

The transformer architecture is composed of an encoder and a decoder. The encoder converts the input into sets of "queries," "keys," and "values," which are passed through a series of self-attention layers to produce a set of "context vectors." These context vectors are then passed to the decoder, which generates the output.
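To make the "queries, keys, and values" idea concrete, here is a minimal sketch of the scaled dot-product self-attention computation at the heart of the transformer, written with NumPy. This is a simplified illustration: in a real transformer the queries, keys, and values come from learned projections of the input, not random placeholders.

```python
# A minimal sketch of scaled dot-product self-attention using NumPy.
# In a real transformer, Q, K, and V are produced by learned linear
# projections of the input embeddings; here they are random placeholders.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

seq_len, d_model = 5, 8                        # 5 tokens, 8-dimensional vectors
rng = np.random.default_rng(0)
Q = rng.standard_normal((seq_len, d_model))    # queries
K = rng.standard_normal((seq_len, d_model))    # keys
V = rng.standard_normal((seq_len, d_model))    # values

# Each token's query is compared against every key; softmax turns the
# scores into attention weights over the whole sequence.
scores = Q @ K.T / np.sqrt(d_model)
weights = softmax(scores, axis=-1)

# The context vectors are weighted sums of the values: every position can
# draw on information from every other position, in parallel.
context = weights @ V
print(context.shape)   # (5, 8): one context vector per input token
```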

One of the key advantages of the transformer architecture is that it can handle input sequences of variable length and it can process the entire input sequence in parallel, which makes it more efficient than traditional recurrent neural networks (RNNs), which process the input sequentially.

In simple terms, a transformer is a type of neural network architecture used primarily for natural language processing tasks. Its self-attention mechanism allows the model to weigh the importance of different parts of the input when making predictions, so it can take the context of the input into account and better capture the meaning of the text.

Accessibility Statement
The University provides reasonable accommodations to individuals with disabilities upon request. Any person with a disability who needs accommodations for a workshop should contact UMA Libraries at uma.library@maine.edu to submit a request. Due to the lead time needed to arrange certain accommodations, individuals should submit their request no later than 1 week before the event.