Generative AI for Everyone | Notes | Week 1

I recently finished Andrew Ng’s course on Coursera – Generative AI for Everyone. These are my notes for week 1 of that course.

Generative AI is defined as artificial intelligence systems that can generate high-quality content like images, text, audio and video. Through ChatGPT, we have seen it can produce text. Adobe has AI in its tools, which let us create images from prompts.

Andrew makes a point in one of the videos that AI is a general-purpose technology. Just like electricity is used to power many things, AI can be applied to many different problems. We already see AI applications in our day-to-day lives: spam filtering, recommendations on Amazon/Netflix, ChatGPT, etc.

Some applications of generative AI –

  1. Writing – Since LLMs work by predicting the next word, they can be used to write content for you. For example, you can ask an LLM to write a LinkedIn post or blog post. LLMs are also used for translating from one language to another.
  2. Reading – LLMs can also read long texts and produce shorter summaries. For example, you can ask an LLM to go through your resume and create a LinkedIn summary for you.
  3. Chatting – LLMs can be used to build specialized chatbots for an organization, tailored to its requirements.
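The "predict the next word" idea behind the writing use case can be sketched with a toy bigram model. This is a hypothetical, drastically simplified stand-in for a real LLM (which uses neural networks and subword tokens), but the generation loop – predict the next token, append it, repeat – is the same:

```python
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the continuation seen most often in training."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

def generate(counts, start, length=5):
    """Repeatedly predict the next word -- the core loop behind LLM text generation."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(counts, out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(generate(model, "the", length=3))  # "the cat sat on"
```

A real LLM replaces the frequency table with a neural network trained on a huge corpus, which is why its continuations are so much more fluent.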

Limitations of LLMs – 

  1. Knowledge cutoff – An LLM’s knowledge is confined to the data it was trained on, so it knows nothing about events after its training cutoff.
  2. Hallucinations – LLMs can make things up and present them in a very confident, authoritative tone.
  3. Input length (prompt length/context length) and output length are limited.
  4. LLMs don’t work well with tabular (structured) data.
  5. LLMs can reflect the biases of the data they were trained on.
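The context-length limitation (point 3) is something application code often has to handle explicitly. Below is a minimal sketch that uses a naive whitespace "tokenizer" as an assumption; real LLMs count subword tokens (e.g. BPE), but the effect is the same – text beyond the window is simply never seen by the model:

```python
def truncate_to_context(prompt, max_tokens):
    """Keep only the last `max_tokens` whitespace-separated tokens.

    Real LLM tokenizers split text into subword units, but the idea
    is the same: anything beyond the model's context window is
    dropped, so the model never sees it.
    """
    tokens = prompt.split()
    if len(tokens) <= max_tokens:
        return prompt
    return " ".join(tokens[-max_tokens:])

long_prompt = "word " * 10 + "important question"
print(truncate_to_context(long_prompt, 4))  # "word word important question"
```

Keeping the *end* of the prompt is one common choice (the most recent conversation turns usually matter most); summarizing the dropped part is another.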

Thanks for stopping by! 

