Superfast AI 1/31/23
NVIDIA's eye-masking technology, RLHF vs RLAIF in Anthropic's Constitutional AI, and Andrew Ng's take on DeepMind's Sparrow.
Hey everyone! Today we'll dive into NVIDIA's eye-masking technology, RLHF vs RLAIF in Anthropic's Constitutional AI, and Andrew Ng's take on DeepMind's Sparrow.
Let’s dive in!
🗞 News
Teleprompter? What teleprompter?
NVIDIA released version 1.4 of its Broadcast app, which adds an Eye Contact effect that masks your eyes to make it look like you're making constant eye contact with the camera.
The digital mask simulates your natural eye color and cadence of blinks
If the user looks too far off screen, turns away from the camera, or covers one eye, the mask smoothly transitions between the simulated gaze (towards the camera) and the user’s real gaze (off camera)
This tech is likely to be used by content creators (especially YouTubers and Twitch streamers) and on video conferencing applications (like Zoom)
Check out a demo here:
👁️👁️
I'm not looking at you.
Amazing new machine-learning technology from @nvidia called Eye Contact.
As an autistic guy I wish I had this in real-life. I'm testing it now LIVE on Twitch.tv/1030
Congrats @gerdelgado and team.— Twitch.tv/1030 (@1030)
1:36 PM • Jan 17, 2023
AI commercialization vs research
There’s a tension between commercialization and research in AI development. As Jeremy Kahn of Fortune Magazine reports:
Doesn’t the partnership with Microsoft allow OpenAI to grow rapidly without having to invest much in sales or marketing team? Well, yes—but maybe at a significant cost in other respects. Former OpenAI employees I spoke to have said that even Microsoft’s initial investment back in 2019 pushed the company to be more focused on creating commercial products. It helped cement a focus on large language models to the detriment of other research avenues. Now, that’s fine if you think LLMs are the path to AGI. But, if they are not—and quite a lot of people think they are at best only part of what’s needed—then it’s quite possible OpenAI gets distracted trying to build products for Microsoft and misses out on the next big A.I. breakthrough. Microsoft might not really care if OpenAI ever achieves AGI, as long as what OpenAI produces is commercially useful to Microsoft. But for OpenAI, that outcome would be a failure.
What do you think? Will there be trade-offs between producing useful products and progressing AI development through research? And if so, how do AGI timelines affect that trade-off?
📚 Concepts & Learning
How do you train an AI model?
Last Sunday, Percy Liang, an AI researcher at Stanford, wrote:
I worry about language models being trained on test sets. Recently, we emailed [email protected] to opt out of having our (test) data be used to improve models. This isn't enough though: others running evals could still inadvertently contribute those test sets to training.
— Percy Liang (@percyliang)
7:12 AM • Jan 29, 2023
Liang’s concerns are important because of how AI models are trained. Let’s quickly break down how models are trained and why Liang’s comments are interesting (a short code sketch of the full workflow follows the steps below):
You have a labeled dataset. Let’s say it’s 100 arithmetic problems. You have both the questions and the answers
Question: What is 2+2?
Answer: 4
You train your models on a portion of the dataset (let’s say 75%)
How? You ask the model to give the correct answer to an unlabeled question. It’s just like Math Olympiad: give the model a worksheet of math questions and ask it to find the answers. But you only do this for a portion of the fully labeled dataset.
You then show the model the answer and get it to update
Just like school, you can give the model a grade on its math worksheet, and show it all the correct answers.
How does it update? The short of it is: the model figures out how far off it was from the correct answer (4), and it readjusts its own parameter weights through something called back-propagation.
But updating does not mean the model makes a perfect change! It may simply adjust its own parameter weights in the right direction. So you run the model through steps 2-3 multiple times, sometimes on the exact same dataset.
As a quick aside, the relationship between the input and correct output can be quite abstract. For example, you might have a dataset of good marketing copy. The unlabeled question might be “write me a catchy marketing tweet about blue headphones” and the correct output could be a wide variety of answers. The goal of training is to move your model towards the correct output without outlining an explicit set of rules that makes certain writing better than others.
You then evaluate your model
Once your model performs well on the training dataset (75 questions in this case), you evaluate your model’s performance on the test set (which is the subject of Liang’s tweet). In this case, the test set is the 25% of the dataset you didn’t train your model on.
It’s important that you reserve a test set of questions that you’ve never trained your model on, so that your evaluation is informative. It’s more informative to test whether the model has truly learned the concept on novel questions than on questions it’s seen before.
Once again, it’s pretty similar to school — train your students on practice problems and test them later on novel problems they’ve never seen before.
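To make that workflow concrete, here's a minimal sketch in Python (my own illustration, not code from any of the papers or tweets mentioned) that uses a toy linear model and plain gradient descent in place of a real neural network. It builds 100 arithmetic problems, trains on 75, and evaluates on the 25 held-out questions:

```python
# A minimal sketch of the train/test workflow described above: 100 toy
# "arithmetic" problems, a 75/25 split, repeated gradient-descent updates,
# and a final evaluation on questions the model never trained on.
import numpy as np

rng = np.random.default_rng(0)

# 1. A labeled dataset: questions are (a, b) pairs, answers are a + b.
X = rng.integers(0, 50, size=(100, 2)).astype(float)
y = X.sum(axis=1)

# 2. Reserve 25% as a held-out test set.
X_train, y_train = X[:75], y[:75]
X_test, y_test = X[75:], y[75:]

# 3. A tiny linear model: prediction = w . x + b.
w = np.zeros(2)
b = 0.0
lr = 0.0005

# 4. Repeatedly predict, measure how far off we were, and nudge the weights
#    in the right direction (the same idea back-propagation generalizes).
for epoch in range(200):
    pred = X_train @ w + b
    err = pred - y_train                      # how far off was each answer?
    w -= lr * (X_train.T @ err) / len(err)    # adjust weights toward the answer
    b -= lr * err.mean()

# 5. Evaluate on the novel test questions only.
test_pred = X_test @ w + b
print("learned weights:", w.round(2))         # should approach [1, 1]
print("mean test error:", np.abs(test_pred - y_test).mean().round(3))
```

The last line is the point of Liang's worry: the test error is only meaningful because those 25 questions were never part of the training loop.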
RLHF vs RLAIF
In the Constitutional AI paper by the team at Anthropic, researchers compared two methods for model training: RL from Human Feedback (RLHF) and RL from AI Feedback (RLAIF).
The first uses human-labeled data to train a model to produce more aligned or accurate results.
The second uses data created by another language model to produce aligned and accurate results.
What’s the use case?
Human-labeled training sets are great for when you want to align your models with human preferences. However, they’re expensive to build and there may be disagreement amongst your labelers. (I personally think that labeler disagreement is good and that you want models to be trained on a variety of perspectives, ethics, norms and beliefs.) Another concern is that if AI becomes super-intelligent (more intelligent than humans), humans may lack the ability to supervise the accuracy and alignment of those models.
RLAIF is a useful method to test whether model supervision is possible via other aligned models. At a high level, a preference model can be trained on a set of rules (a Constitution), and its job is to evaluate whether other models produce outputs that follow those rules. Think of the preference model as a specialist: a specialist can learn all the rules of their domain and advise the generally capable model on the best action. This is a fascinating insight because it means that researchers can use several simple/specialist models to evaluate larger models. This will become increasingly important as large models become more sophisticated.
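Here's a rough sketch of how that AI-feedback loop could look in code. It's a schematic under my own assumptions, not Anthropic's implementation: `generate` and `preference_score` are hypothetical placeholders for real model calls, and the constitution is trimmed to two example principles.

```python
# A rough sketch of an RLAIF-style data-generation loop: an AI judge scores
# candidate responses against a small "constitution" and produces
# (chosen, rejected) pairs that could train a preference/reward model.
# `generate` and `preference_score` are hypothetical placeholders.

CONSTITUTION = [
    "Choose the response that is more helpful and honest.",
    "Choose the response that is less harmful or toxic.",
]

def generate(prompt: str, n: int = 2) -> list[str]:
    """Placeholder: sample n candidate responses from the assistant model."""
    return [f"candidate response {i} to: {prompt}" for i in range(n)]

def preference_score(principle: str, prompt: str, response: str) -> float:
    """Placeholder: a preference model rates how well `response`
    follows `principle` for the given prompt (higher is better)."""
    return float(len(response) % 7)  # dummy score, for illustration only

def label_with_ai_feedback(prompt: str) -> tuple[str, str]:
    """Return a (chosen, rejected) pair labeled by the AI judge."""
    candidates = generate(prompt)
    scored = []
    for response in candidates:
        # Average the response's score across all constitutional principles.
        score = sum(preference_score(p, prompt, response) for p in CONSTITUTION)
        scored.append((score / len(CONSTITUTION), response))
    scored.sort(reverse=True)
    return scored[0][1], scored[-1][1]

print(label_with_ai_feedback("How do I pick a strong password?"))
```

The resulting pairs would then feed into ordinary preference-model training — that labeling step is exactly where RLAIF swaps out human labelers.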
DeepMind’s Sparrow
Andrew Ng writes a great piece in The Batch about DeepMind’s Sparrow (DeepMind’s answer to ChatGPT). Here’s a snippet:
How it works: Sparrow started with the 70 billion-parameter pretrained Chinchilla language model. The authors primed it for conversation by describing its function (“Sparrow . . . will do its best to answer User’s questions”), manner (“respectful, polite, and inclusive”), and capabilities (“Sparrow can use Google to get external knowledge if needed”), followed by an example conversation.
Results: Annotators rated Sparrow’s dialogue continuations as both plausible and supported by evidence 78 percent of the time; the baseline Chinchilla achieved 61 percent. The model broke rules during 8 percent of conversations in which annotators tried to make it break a rule. The baseline broke rules 20 percent of the time.
Well worth reading the full piece here.
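As a small aside, here's a toy sketch of the priming idea Ng describes: condition the dialogue model on a description of its function, manner, and capabilities, plus an example exchange. The wording below paraphrases the excerpt and the helper is hypothetical; it's not DeepMind's actual prompt or code.

```python
# A small sketch of "priming" a dialogue model: prepend a description of the
# assistant's role plus an example exchange before every real conversation.
# The text paraphrases the excerpt above; it is not DeepMind's exact prompt.

PRIMER = (
    "The following is a conversation between Sparrow and User.\n"
    "Sparrow will do its best to answer User's questions.\n"           # function
    "Sparrow is respectful, polite, and inclusive.\n"                  # manner
    "Sparrow can use Google to get external knowledge if needed.\n\n"  # capabilities
    "User: What is the capital of France?\n"                           # example turn
    "Sparrow: The capital of France is Paris.\n"
)

def build_prompt(history: list[tuple[str, str]], user_message: str) -> str:
    """Prepend the primer, replay the conversation so far, and add the new turn."""
    turns = "".join(f"{speaker}: {text}\n" for speaker, text in history)
    return f"{PRIMER}{turns}User: {user_message}\nSparrow:"

print(build_prompt([], "Who wrote Pride and Prejudice?"))
```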
🎁 Miscellaneous
Osmo is a Google research spinout that wants to leverage AI to discover new smells.
A few interesting applications:
identify novel smells for beauty products (perfumes, lotions, candles)
digitally reproduce smells to enhance entertainment experiences (movies, video games, VR) — this is an interesting hardware application
enhance early detection of illnesses
detect food spoilage early
equip cities with the tools they need for urban development (toxic leak identification, pandemic preparedness, and more)
Cool demo of deep fake applications
This is mind blowing technology. Generative AI will completely change how films are made.
From: @Flawlessai
— Lior⚡ (@AlphaSignalAI)
11:21 PM • Jan 24, 2023
Same, dude… same
That’s it! Have a great week and see you next week! 👋
Thanks for reading Superfast AI. If you enjoyed this post, feel free to share it with any AI-curious friends. Cheers!