Introduction to Gen AI

What is Generative AI?

Generative AI is a technology that can produce content such as text, images, music, videos, and even code, based on input or prompts. However, it’s important to understand that generative AI doesn’t create in the traditional sense—it generates.

While creativity involves producing something entirely new through imagination and skill, generative AI works by generating outputs based on patterns and existing data.

For instance, AI text generators predict the next word in a sequence based on the patterns they've learned, but they don't "understand" the content as humans do.  Although AI can replicate certain elements of human intelligence and even excel in specific tasks, it lacks consciousness, reasoning, and true comprehension.
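
To make next-word prediction concrete, here is a toy sketch in Python. It is an illustration only: the sample sentence is invented, and real text generators use neural networks trained on vast datasets rather than simple word counts, but the underlying idea of generating from learned patterns is the same.

    from collections import Counter, defaultdict

    # Count which word tends to follow each word in a small sample of text,
    # then "generate" by repeatedly picking the most frequent follower.
    training_text = "the cat sat on the mat the cat slept on the sofa"
    words = training_text.split()

    followers = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        followers[current_word][next_word] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the training text."""
        if word not in followers:
            return None
        return followers[word].most_common(1)[0][0]

    # Generate a short sequence starting from "the".
    word = "the"
    generated = [word]
    for _ in range(4):
        word = predict_next(word)
        if word is None:
            break
        generated.append(word)

    print(" ".join(generated))  # e.g. "the cat sat on the"

The sketch produces plausible-looking text without any notion of what a cat or a mat is, which is the sense in which generative systems pattern-match rather than understand.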

Additionally, AI systems often reflect the biases embedded in their training data. If the data contains biases, the AI is likely to reproduce these biases in its generated outputs, highlighting the importance of carefully selecting and curating training data.

Terminology

AI is a complex field with many technical terms and concepts. Familiarity with these terms can enhance your understanding of AI and its potential applications in your work.

Artificial Intelligence (AI): AI refers to systems designed to carry out tasks that usually require human intelligence. These systems can learn from experience, identify patterns, make decisions, and solve problems. Essentially, AI involves building machines that can learn and act in ways that resemble human thinking, allowing them to interpret information and make decisions with minimal human input.

Generative AI: This form of AI can generate new content, such as text, images, audio, video, and even code. Trained on large datasets, generative AI produces new outputs by recognizing patterns in the data it has learned.

Machine Learning: Machine learning is the process of "teaching" a computer model to analyze data, make predictions, and draw conclusions.
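
As a rough illustration of this "teach, then predict" workflow, the short Python sketch below uses the scikit-learn library; the study-hours and exam-score numbers are invented purely for the example.

    from sklearn.linear_model import LinearRegression

    # Example data the model learns from: hours studied -> exam score.
    hours_studied = [[1], [2], [3], [4], [5]]   # inputs (features)
    exam_scores = [52, 60, 68, 76, 84]          # known outcomes (labels)

    model = LinearRegression()
    model.fit(hours_studied, exam_scores)       # "teach" the model the pattern

    prediction = model.predict([[6]])           # ask about a case it has not seen
    print(prediction[0])                        # approximately 92.0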

Natural Language Processing (NLP): NLP is a branch of AI focused on developing software that comprehends, interprets, and responds to human language, enabling tools like virtual assistants, chatbots, translation services, and speech synthesis.

Neural Networks: Modeled after the human brain, neural networks are a form of AI used in tasks like image and speech recognition, as well as language processing.

Large Language Models (LLMs): These AI models, trained on vast amounts of text, can generate new text that mimics the style and tone of the original data. They are used in tasks like translation, summarization, and content generation. ChatGPT is a notable example of an LLM.

Hallucinations: In AI, "hallucinations" refer to errors where the system generates information that sounds plausible but is factually incorrect. These errors can arise from improper training or exposure to incomplete or biased data.

OpenAI: OpenAI is the organization behind ChatGPT, the large language model that is featured throughout this course. It was co-founded in 2015 by a group that included Sam Altman, who serves as its CEO.

Prompt Engineering: This involves crafting clear and precise prompts to guide AI models in generating accurate and relevant responses. The goal is to ensure the AI’s output aligns with specific objectives.
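
For example, the sketch below contrasts a vague prompt with a more precise one and sends the precise version to a model. It assumes the official openai Python package (version 1.x), an API key stored in the OPENAI_API_KEY environment variable, and an example model name; none of these details are prescribed by this course.

    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    # A vague prompt leaves the model guessing about audience, length, and tone.
    vague_prompt = "Write about libraries."

    # A precise prompt states the task, audience, length, tone, and desired ending.
    precise_prompt = (
        "Write a 150-word announcement for a public library newsletter "
        "introducing a weekly generative AI workshop for adults. "
        "Use a friendly, plain-language tone and end with a call to register."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[{"role": "user", "content": precise_prompt}],
    )
    print(response.choices[0].message.content)

Trying both prompts and comparing the results is a quick way to see how much the wording of a prompt shapes the output.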