A new Chinese vibe-coding tool exploded in popularity last week, so of course, I had to test it.
LingGuang, an AI app for building apps using plain-language prompts, launched on November 18. By Monday, it had racked up over 2 million downloads.
Ant Group, the Chinese tech company that built the tool, said the surge of users briefly crashed its flash app feature.
To see what the hype was about, I took LingGuang for a spin — and stacked it against OpenAI’s ChatGPT.
The AGI camera stole the show
I logged in with my Alibaba account (Ant Group is an affiliate of the Chinese conglomerate Alibaba Group) and landed on an animated mountain landscape paired with a Chinese tagline: “Let the complex be simple.”
Compared with ChatGPT’s plain backdrop, LingGuang looked like it was beamed in from 2030.
LingGuang offers a feature that caught my eye: an artificial general intelligence, or AGI, camera. Ant Group said it can understand scenes in real time and help users analyze or edit what they’re looking at without uploading a photo.
I first tested it at work, with wild results. I pointed my phone camera at a startup founder speaking in a podcast video clip, and LingGuang instantly recognized him and named the company he started.
I took it to my local supermarket to see what else it could do.
I was hunting for a post-workout protein smoothie, and I pointed the AGI camera at three brands on the shelf. The app immediately identified the English-labeled products and surfaced essential information, including protein levels, flavor, whether it contained sweetener, and what it was suitable for. The information checked out, although I needed to make sure the camera had a clear shot of the product.
[Image: Lee Chong Ming/LingGuang]
To determine which one was the smartest buy, I activated voice mode and asked in Chinese. LingGuang compared protein content, brand specialty, and price, pulling data from both the image and the web. Then it gave recommendations: most nutritious, best value, and a lactose-free pick.
I tried the same thing with ChatGPT. Because it can’t analyze scenes in real time, I took a photo of the shakes and uploaded it manually — a process that felt outdated after using LingGuang.
ChatGPT’s comparison was detailed and on par with LingGuang’s, but the experience lacked the immediacy and visual cues that made LingGuang feel seamless.
One user interface difference also stood out. When LingGuang captures an image, it surfaces tappable prompt bubbles that guide you through the next steps.
[Image: Lee Chong Ming/LingGuang]
ChatGPT suggests prompts as well, but they sit below the chatbox and still require typing. LingGuang felt like an AR companion, while ChatGPT felt like, well, chat.
The Chinese app had one drawback: nothing from an AGI camera session is saved. I couldn’t revisit any photos or responses afterward, which makes it hard to reference anything later. ChatGPT saves every uploaded image in the chat, something I rely on.
Generating videos on the fly
LingGuang also offers something ChatGPT doesn’t: on-the-fly video and image generation directly from its AGI camera.
Users can snap a photo, tap into the edit tab, and turn the image into a video or edit it with prompts.
I snapped a photo of my Labubu on the AGI camera and asked LingGuang to make it smile and dance.
Twenty seconds later, it spat out a clip of my Labubu grinning and flapping around like a tiny bat, synced to the movement of my hand in the frame and set to a cute soundtrack.
ChatGPT has no equivalent feature. To animate an image, I had to switch to Sora, upload a photo I took of Hong Kong’s harbor, and ask it to “bring it to life.” The result was stunning and a little dramatic.
LingGuang handled the same image differently. Its output was strong, with softer waves and a more realistic feel — almost as if I were on a boat.
[Image: Lee Chong Ming/Sora; LingGuang]
Visual style comes down to personal preference, but LingGuang allows me to capture, edit, and generate a video in a single, continuous workflow. On user experience, it wins.
I built a flash app in a minute
LingGuang’s flash app feature — the one that crashed from overuse — promised to build mini-apps in 30 seconds.
When I opened it, LingGuang suggested app ideas. One of them was a “meal decision” generator that works like a food lottery.
My friends and I regularly spend more time deciding what to eat than actually eating, so I tapped it. The screen started “thinking.” It took about a minute rather than the promised 30 seconds, but a fully formed mini-app appeared.
The bot’s build notes were clear: the app would include dish names, their origins, and a brief description of why each was recommended. The flash app added food emojis and sound effects to mimic the drumroll-and-reveal vibe of a lottery. All I did was tap a prompt. It felt like sorcery.
The generator recommended dishes like curry rice and Japanese ramen. Wanting to push the app further, I asked it to tailor the mini-app to food from Singapore, where I live.
Another minute later, it regenerated the entire interface and swapped in local dishes. One of the first picks: Katong laksa, hyper-specific to where I live. Another: chilli crab, the classic tourist magnet. The flash app nailed my local cuisine.
[Image: Lee Chong Ming/LingGuang]
I asked ChatGPT to create a flash app that could “help me choose what to eat on a daily basis.” It generated the full code, explained how to build it, and even suggested ways to customize it.
There was no instant app, but I appreciated having actual code to work with, something LingGuang never surfaced. LingGuang’s flash feature works for simple, everyday use cases. For anything more complex, I’d still turn to ChatGPT or other vibe-coding tools.
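For a sense of what that kind of output looks like, here's a minimal sketch of a meal-decision picker in Python. To be clear, this is my own illustration of the idea, not ChatGPT's actual code: the dish list, the pick_meal function, and the avoid option are all stand-ins.

```python
import random

# Illustrative dish pool; the entries are stand-ins, not the app's real data.
DISHES = [
    {"name": "Katong laksa", "origin": "Singapore", "note": "rich coconut broth, a local classic"},
    {"name": "Chilli crab", "origin": "Singapore", "note": "sweet-spicy and messy, the tourist magnet"},
    {"name": "Curry rice", "origin": "Japan", "note": "comforting and quick"},
    {"name": "Ramen", "origin": "Japan", "note": "good on a rainy day"},
]

def pick_meal(avoid=None):
    """Pick a random dish, optionally skipping whatever you ate yesterday."""
    pool = [d for d in DISHES if d["name"] != avoid]
    return random.choice(pool)

if __name__ == "__main__":
    choice = pick_meal(avoid="Ramen")
    print(f"Today's pick: {choice['name']} ({choice['origin']}), {choice['note']}")
```

Wrap that logic in a button and some emoji and you have roughly what LingGuang's flash app produced, minus the drumroll.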