Quickly make your AI image high-res
PLUS: What if the average Joe governed AI?
Good morning, human brains. Welcome back to your daily munch of AI news.
Here’s what’s on the menu today:
Quickly make your AI images higher resolution 📸 🖼️
Midjourney’s new upscaling features enhance your image’s resolution.
Can AI decode your brain’s visual data? 🧠 🛜 🌇
Meta’s new AI system generates images from brain scans.
What if the average Joe governed AI? 🤷‍♂️ 🤖 🤷‍♀️
Anthropic got 1,000 people to draft an AI model’s constitution.
MAIN COURSE
Create high-resolution AI close-ups 📸 🖼️
In August, we reported on Midjourney’s “Vary Region” feature. It’s an inpainting tool that regenerates areas of upscaled images.
In July, we covered Midjourney’s “Pan” feature. It’s an outpainting tool that lets you quickly extend your image vertically and side-to-side.
Let me guess: they released another feature.
Yes. On Wednesday, Midjourney announced 2x and 4x upscaling features. They enhance the detail of your AI-generated images.
What’s so special about them?
You can increase your generated image’s resolution and retain its original artistic style.
How do I use it?
All you have to do is take your newly generated image and click the “Upscale 2x” or “Upscale 4x” button. Then, AI increases your image’s resolution and detail.
Here’s a before and after:
The left image is the original image.
The right image is the same image after Upscale 2x.
FROM OUR PARTNERS
Turn your sketch into a professional design 🖌️ 📐
No design experience necessary.
Uizard is the world's easiest-to-use design and ideation tool, powered by AI.
Never leave an idea behind... Generate mockups from text prompts, scan screenshots of apps or websites, and drag and drop UI components to bring your vision to life.
Uizard makes UI design easy for anyone.
👉️ Generate landing pages, wireframes, and applications in minutes.
👉 Collaborate in real-time with your team.
👉 Built for both professionals and beginners.
It’s the ultimate productivity hack, allowing you to move projects forward without waiting months for design resources.
What would you like to design?
BUZZWORD OF THE DAY
Encoder
An encoder takes an input signal and encodes it into a format suitable for transmission or storage, while a decoder takes an encoded input signal and decodes it into the original format.
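To see the idea in code, here’s a tiny Python example (ours, not from any of today’s stories) that uses the standard library’s base64 module to encode a string into a format safe for transmission or storage, then decode it back to the original:

```python
import base64

message = "Bot Eat Brain"

# Encoder: turn the raw string into a transmission-safe encoded signal.
encoded = base64.b64encode(message.encode("utf-8"))
print(encoded)   # b'Qm90IEVhdCBCcmFpbg=='

# Decoder: recover the original format from the encoded signal.
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)   # 'Bot Eat Brain'
assert decoded == message
```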
SIDE SALAD
Meta extracts images from brain scans 🧠 🛜 🌇
How does it work?
It uses magnetoencephalography (MEG) to decode visual representations in the brain in real time.
Magneto-what?
Magnetoencephalography measures brain activity by detecting magnetic fields produced by the brain.
So it’s a brain implant?
It’s a non-invasive neuroimaging technique that doesn’t require surgical procedures, injections, or body penetration in any way.
Meta’s approach uses an image encoder, a brain encoder, and an image decoder.
The image encoder analyzes the image, the brain encoder aligns brain signals to the image, and an image decoder recreates the image from brain signals.
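To make that three-part pipeline concrete, here’s a toy PyTorch sketch of the idea. This is our illustration, not Meta’s code: the module names, input shapes, and loss are made up, and the real system relies on far more sophisticated pretrained components.

```python
import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    """Maps an image to an embedding vector."""
    def __init__(self, embed_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, embed_dim))

    def forward(self, image):
        return self.net(image)

class BrainEncoder(nn.Module):
    """Maps an MEG recording into the same embedding space as the image."""
    def __init__(self, meg_channels=272, timesteps=100, embed_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(meg_channels * timesteps, embed_dim))

    def forward(self, meg):
        return self.net(meg)

class ImageDecoder(nn.Module):
    """Reconstructs an image from an embedding."""
    def __init__(self, embed_dim=256):
        super().__init__()
        self.net = nn.Linear(embed_dim, 3 * 64 * 64)

    def forward(self, embedding):
        return self.net(embedding).view(-1, 3, 64, 64)

image = torch.randn(1, 3, 64, 64)   # the picture the subject is looking at
meg = torch.randn(1, 272, 100)      # the simultaneous MEG recording

img_emb = ImageEncoder()(image)
brain_emb = BrainEncoder()(meg)

# Training pulls the brain embedding toward the image embedding,
# so brain signals end up "speaking the same language" as images.
alignment_loss = nn.functional.mse_loss(brain_emb, img_emb)

# The decoder then recreates an image from brain signals alone.
reconstruction = ImageDecoder()(brain_emb)
print(alignment_loss.item(), reconstruction.shape)
```

The key design choice is the shared embedding space: once brain activity and images map to comparable vectors, generating a picture from a brain scan reduces to decoding an embedding.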
Want to learn more about the potential of mind-reading AI? Check out our previous editions of Bot Eat Brain.
A LITTLE SOMETHING EXTRA
AI governed by average people? 🤷‍♂️ 🤖 🤷‍♀️
Last month, we covered Anthropic’s new partnership with Amazon. Anthropic is the AI company that developed Claude.
On Tuesday, it announced Collective Constitutional AI: a way to make an AI follow rules written in a “constitution,” or set of principles.
What did they do?
They got about 1,000 Americans to help draft a constitution for an AI system. Before this, the AI’s constitution was written by Anthropic.
Did they get them all in a room?
No, they used an online platform called Polis to gather the public’s thoughts on AI.
Did anarchy ensue?
Actually, the public’s suggested rules had about 50% overlap with Anthropic’s original constitution.
Surely, Anthropic’s constitution was better. Right?
Nope. They trained one AI model with the public’s constitution and one with Anthropic’s.
The models performed similarly on most tasks, but the one trained on the public’s constitution showed less bias, especially regarding disability status and physical appearance.
MEMES FOR DESSERT
YOUR DAILY MUNCH
Think Pieces
Is OpenAI really open? Stanford, MIT, and Princeton released an index that rates the transparency of the top AI models.
How AI teaches us about ourselves. A look at what differentiates how humans and LLMs “understand” things.
What happens when AI “listens” to a forest? How researchers use it to track biodiversity in nature.
Startup News
OpenAI partnered with Abu Dhabi’s top AI company, G42. The goal is to integrate AI into banks, hospitals, and more.
Adept open-sourced Fuyu-8B. The multimodal model is designed to let digital agents see images and read text.
Universal Music sues Anthropic. The label claims Anthropic distributes copyrighted song lyrics without permission.
Research
MiniGPT-v2 — a vision-language model that tackles image description, visual question answering, and more.
4K4D — a method for achieving high-fidelity, real-time rendering of dynamic 3D scenes at 4K resolution.
Zipformer — a revamped transformer for automatic speech recognition that’s faster, uses less memory, and outperforms the widely-used Conformer.
Tools
Martin — a conversational voice assistant powered by AI.
Knibble — a tool to create chatbots and AI knowledge bases.
Narrato — an AI-powered marketing and content creation tool.
Browserbear — a no-code AI-powered web scraping tool.
RECOMMENDED READING
If you like Bot Eat Brain there’s a good chance you’ll like this newsletter too:
👨 The Average Joe — Market insights, trends, and analysis to help you become a better investor. We like their easy-to-read articles that cut right to the meaty bits.
TWEET OF THE DAY
Stability AI’s CEO tweets an adorable AI-generated image with the prompt used to make it. Spoiler alert: it rhymes. More on Stability here.
Tag us on Twitter @BotEatBrain for a chance to be featured here tomorrow.
AI ART-SHOW
Until next time 🤖😋🧠
What'd you think of today's newsletter?