PPT on Google's new trillion-parameter AI language model.
GOOGLE'S NEW TRILLION-PARAMETER AI LANGUAGE MODEL
• Google Brain has developed an artificial intelligence language model with some 1.6 trillion parameters. This is nine times the size of OpenAI's GPT-3, which has 175 billion parameters.
INTRODUCTION
Source: aibusiness.com
• Language models can perform a range of tasks, but the generation of novel text may be the most common one.
• For example, «the Philosopher AI» is a language model you can visit online and ask to ponder whatever you like (with numerous notable exceptions).
BACKGROUND
Source: thenextweb.com
• While these impressive AI models sit at the cutting edge of machine learning technology, it is vital to note that they are ultimately just performing parlor tricks.
• These applications don't understand language; they're just fine-tuned to make it seem like they do.
MACHINE LEARNING TECHNOLOGY
Source: aibusiness.com
• The Brain team found that by keeping the model architecture as simple as possible and making efficient use of the available compute, the number of parameters could be pushed as high as possible (a back-of-the-envelope illustration of this trade-off appears below).
WHAT GOOGLE HAS DONE
Source: aibusiness.com
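As a back-of-the-envelope illustration of that design principle, the sketch below (in Python, with made-up layer dimensions, not Google's actual configuration) shows how sparsely activating one of N weight blocks per input multiplies stored parameters by N while leaving per-input compute unchanged:

# Hedged sketch: sparse activation grows parameter count, not per-input
# compute. All dimensions are illustrative assumptions.
d_model, d_ff = 1024, 4096
dense_params = 2 * d_model * d_ff              # weights in one dense feed-forward layer

for n_blocks in (1, 64, 2048):
    total_params = n_blocks * dense_params     # every block stores its own weights
    per_input_flops = 2 * dense_params         # but only one block runs per input
    print(f"{n_blocks:>4} blocks: {total_params:.2e} params, {per_input_flops:.2e} FLOPs/input")

With 2,048 blocks this toy layer stores roughly 17 billion parameters yet does only a single dense layer's work per input, which is the same trade-off that lets the full model reach 1.6 trillion parameters.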
• Google's 1.6-trillion-parameter model, which appears to be the largest of its kind to date, achieved an up to 4x speedup over the previously largest Google-developed language model, T5-XXL.
GOOGLE-DEVELOPED LANGUAGE MODEL
Source: venturebeat.com
• Large-scale training is an effective path toward powerful models: simple architectures, backed by massive datasets and large parameter counts, surpass far more complicated algorithms.
• Effective large-scale training is, however, extremely computationally intensive.
LARGE-SCALE TRAINING
Source: venturebeat.com
• The Switch Transformer builds on mixture of experts, an AI model paradigm first introduced in the early 1990s that relies on a diversity of experts.
• The guiding principle is to keep numerous experts, models specialized in different tasks, inside a larger model, and have a compact gating network select which experts to consult for any given input (a minimal code sketch of this routing appears below).
SWITCH TRANSFORMER
Source: venturebeat.com
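To make that routing idea concrete, here is a minimal, hedged sketch of top-1 ("switch") routing in Python with NumPy. The dimensions and randomly initialized weights are invented for illustration; this shows the general technique the slide describes, not Google's implementation:

import numpy as np

rng = np.random.default_rng(0)
d_model, d_ff, n_experts, n_tokens = 8, 32, 4, 5

# Each expert is an independent feed-forward block (weights only, for brevity).
experts = [(rng.standard_normal((d_model, d_ff)) * 0.1,
            rng.standard_normal((d_ff, d_model)) * 0.1)
           for _ in range(n_experts)]

# The gating (router) network: a single linear layer scoring each expert per token.
w_router = rng.standard_normal((d_model, n_experts)) * 0.1
tokens = rng.standard_normal((n_tokens, d_model))

logits = tokens @ w_router                        # (n_tokens, n_experts)
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
chosen = probs.argmax(-1)                         # top-1: one expert per token

out = np.empty_like(tokens)
for t in range(n_tokens):
    w_in, w_out = experts[chosen[t]]
    # Only the selected expert runs, weighted by its router probability,
    # so per-token compute does not grow with the number of experts.
    out[t] = probs[t, chosen[t]] * (np.maximum(tokens[t] @ w_in, 0.0) @ w_out)

print(chosen)  # which expert the router selected for each token

Adding experts to this pattern multiplies the parameters the model can store without changing the work done per token, which is how the 1.6-trillion-parameter model keeps its training cost comparable to that of far smaller dense models.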
• The researchers claim their 1.6-trillion-parameter model with 2,048 experts (Switch-C) exhibited no training instability at all, in contrast to a smaller model (Switch-XXL) containing 395 billion parameters and 64 experts.
MODEL COMPARISON
Source: venturebeat.com
• Several of the most common models have exhibited elevated rates of stereotypical bias, including Google's BERT and XLNet, OpenAI's GPT-2, and Facebook's RoBERTa.
• Malicious actors might exploit this bias to foment division by spreading misinformation, disinformation, and harmful stereotypes.
STEREOTYPICAL BIAS
Source: venturebeat.com
• Google's reported policy on sensitive machine learning research may have played a role here.
• Researchers at the company are now expected to consult
with legal, pol icy, and public relations teams before
pursuing topics such as face and sentiment analysis and
categorizations of race, gender, or polit ical affi liation.
GOOGLE'S POLICY
Source: venturebeat.com