Cutting-edge AI: DECEMBER Digest

Your selection of research papers in the most critical field in the history of humanity

This month brought us new model-training techniques and language model enhancements. While news outlets and communities are consumed by the hot topic of ChatGPT, we will dive deeper into SOTA advances in language models that outline trends set to fully develop in 2023 and beyond.

The number of AI papers published every month keeps growing. Going forward, we will switch away from monthly digests and focus on research around specific topics, bundling papers released over a period of time into bite-sized articles. In other words, this is the last monthly digest you'll see (which is for the better?)


Some absolute madlads decided to try using diffusion to generate spectrograms, a visual representation of audio data. They generated spectrograms for music of various genres and even mixed them together.

...And it just works!

They published an article with several examples and a web app for people to experiment with themselves. Don't expect wonderful quality, but together with Google's AudioLM, this shows that generative music is coming, and it will arrive sooner than the world is ready for it.
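For intuition, here's what "audio as an image" means in practice. This is a minimal sketch using SciPy (not the authors' actual pipeline): it turns a plain tone into a log-magnitude spectrogram and rescales it into an 8-bit grayscale image, the kind of picture a diffusion model could be trained on.

```python
import numpy as np
from scipy.signal import spectrogram

# A 440 Hz tone sampled at 22.05 kHz; stands in for a music clip.
sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
audio = np.sin(2 * np.pi * 440.0 * t)

# STFT magnitudes laid out as a 2D array: frequency bins x time frames.
freqs, times, sxx = spectrogram(audio, fs=sr, nperseg=512, noverlap=256)

# Log-scale and normalize to [0, 255] so the result can be saved as a
# grayscale image. Reversing the process (image back to audio) is the
# lossy step that limits the quality of generated music.
img = np.log1p(sxx)
img = (255 * (img - img.min()) / (img.max() - img.min())).astype(np.uint8)
print(img.shape)  # (frequency_bins, time_frames)
```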

Article ↗ Web app ↗

Nonparametric Masked LMs

NPM for short, this novel approach replaces the softmax over a fixed output vocabulary with a nonparametric distribution over phrases in a reference corpus. This improves performance on some tasks and improves the models' ability to understand or generate text with words that occur rarely, or not at all, in training datasets.

NpM models achieve the same performance with fewer parameters and enable better scaling, unlocking possibilities for the next language models to show even more fantastic results.

Paper ↗


PaLM and its latest modifications are currently among the most powerful language models in existence. Flan-PaLM was tuned to specialize in the medical domain.

The resulting network shows elevated performance on medicine-related tasks, from giving encouraging answers to a concerned patient's questions about a disease to numerous other use cases.

Paper ↗


This paper introduces another method of enhancing language models, called backward chaining, which improves their ability to solve multi-step reasoning problems. In certain cases, it outperforms Chain-of-Thought (CoT), the approach suggested in the PaLM papers.

Backward chaining is like solving a puzzle backward: start from the goal and work back to known facts. For this to work, LAMBADA incorporates four modules that improve reasoning: Fact Check, Rule Selection, Goal Decomposition, and Sign Agreement.
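The classical algorithm underneath is easy to sketch. Below is a toy symbolic backward chainer (plain hand-written rules, not the paper's LLM-driven modules; the rule and fact names are made up) showing how Fact Check, Rule Selection, and Goal Decomposition fit together:

```python
# Toy backward chaining: work from the goal back to known facts.
# Each rule maps a conclusion to the premises that would establish it.
rules = {
    "can_fly": [["is_bird", "has_healthy_wings"]],
    "is_bird": [["has_feathers"]],
}
facts = {"has_feathers", "has_healthy_wings"}

def prove(goal):
    # Fact Check: is the goal already a known fact?
    if goal in facts:
        return True
    # Rule Selection: pick rules whose conclusion matches the goal.
    for premises in rules.get(goal, []):
        # Goal Decomposition: recursively prove each premise.
        if all(prove(p) for p in premises):
            return True
    return False

print(prove("can_fly"))  # True: both premises reduce to known facts
```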

Paper ↗


Yes, another language model technique. The authors targeted the constraint of training time: they modified BERT, the landmark 2018 language model, to get the best possible performance from training for just one day on a single GPU.

Paper ↗

CALM Acceleration

And yes, yet another language model technique. This Google research paper presents a "confidence" threshold for language models.

The approach lets the model stop computing early once a good-enough prediction for the next token has been found, instead of always running the full layer stack. This improves time and compute efficiency during generation, which adds up massively when large amounts of text are generated.
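The core idea can be sketched in a few lines. Here, hypothetical per-layer logits stand in for a decoder's intermediate predictions, and decoding exits as soon as the top token's softmax probability clears a fixed confidence threshold (a simplification of CALM's calibrated thresholds):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical per-layer logits over a tiny 8-token vocabulary: imagine each
# decoder layer producing an increasingly confident next-token prediction
# (here, confidence in token 3 grows with depth).
layer_logits = [rng.normal(size=8) + i * np.eye(8)[3] for i in range(12)]

def decode_with_early_exit(layer_logits, threshold=0.9):
    """Stop running layers once the top token's probability clears the threshold."""
    for depth, logits in enumerate(layer_logits, start=1):
        probs = softmax(logits)
        if probs.max() >= threshold:
            return int(probs.argmax()), depth  # confident: exit early
    return int(probs.argmax()), depth  # fell through: used all layers

token, layers_used = decode_with_early_exit(layer_logits)
print(token, layers_used)
```

The saving comes from `layers_used` being smaller than the full stack on easy tokens, while hard tokens still get the full computation.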

Article ↗ Paper ↗

Data2Vec 2.0

There is a way of training AIs without labeling data and letting the AI observe the data freely to understand it by itself. This is called self-supervised learning, and the approach has been utilized in many recent works.

Earlier in 2022, Meta proposed an algorithm called Data2Vec that facilitates self-supervised learning across several "modalities": speech, vision, and text.

The release of Data2Vec 2.0 provides up to a 16x increase in pre-training efficiency.

Article ↗ Paper ↗


Another game mastered by AI: this time it is Stratego. It resembles chess but, most importantly, has a "fog of war": the identities of enemy pieces stay hidden until revealed through play.

The game-tree complexity of chess is 10^123; for Go, another recently mastered game, it is 10^360; for Stratego, it is 10^535. The underlying algorithm, DeepNash, reasons about the unknown part of the enemy setup and utilizes deceptive behaviour and bluffing to achieve a 97%+ win rate against all existing Stratego bots.

Article ↗ Paper ↗


OpenAI created a generative network for another type of 3D data: colored points in 3D space, known as point clouds.

You may wonder where this data format is used. It is the output of 3D scanners, is utilized in manufacturing, and in many cases serves as a raw, unprocessed 3D model format. The model can also convert 2D images to point clouds.
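The format itself is easy to picture: just an array of points, each carrying a position and a color. A minimal sketch (the 1024-point count and value ranges are illustrative, not a claim about OpenAI's model):

```python
import numpy as np

rng = np.random.default_rng(1)
n_points = 1024

# Each point: (x, y, z) position plus (r, g, b) color.
xyz = rng.uniform(-1.0, 1.0, size=(n_points, 3))  # positions in a unit cube
rgb = rng.uniform(0.0, 1.0, size=(n_points, 3))   # per-point colors in [0, 1]
cloud = np.concatenate([xyz, rgb], axis=1)        # shape: (n_points, 6)
print(cloud.shape)
```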

Paper ↗


Robotics meets generalization.

You might remember Gato, the most powerful generalist agent to date. While RT-1 doesn't claim that title, it achieves improved performance specifically on robotics tasks, mainly various kinds of robotic arm manipulation.

Article ↗ Paper ↗


Leading meteorology researchers use massive systems to predict the weather; infrastructure all over the world depends on these predictions to cut down repair spending and to ensure longer-lasting designs where necessary.

Now there is an algorithm that beats all existing medium-range systems.

The new algorithm comes with better data resolution, higher accuracy of prediction, and data efficiency.

Article ↗ Paper ↗

Closing words

Thank you for reading this. Here is your minimalistic badge of nothing. Just thought it looked nice. A very nice nothing. What a nice not-a-thing to not-a-have. Ok to be honest we're abolishing the badges thing. Who thought it was a good idea, to begin with? Collect badges that are different at the end of each article? What's the reader supposed to do, download and store them in a special folder on their hard drive?

See you in the next non-monthly article. Rephrase the news for 1-year-olds, then tell your garden plants, the bottles of pesticides you sprinkle them with, and your gardener, of course. Increase AI awareness. Spread the word.

Upvote the post on Reddit ↗

I will miss you.