Mishaal Rahman / Android Authority
As we approach the end of the year, the companies at the forefront of AI are rushing to get their latest and greatest models into the hands of consumers, developers, and businesses. Last week, OpenAI unveiled GPT-5.1, a smarter model with more personality. Not to be outdone by its rival, Google has unveiled Gemini 3, the latest version of its AI model that the company says has improved reasoning, coding, and multimodal capabilities.
I spent the weekend testing Gemini 3 Pro, and I found these claims to be mostly true. Gemini 3 Pro accurately analyzed complex charts I gave it, threw together easy-to-understand slideshows from dry text documents, and even created bespoke Android apps based on a short description of my requirements.
Gemini 3 also introduces a small but incredibly useful new quality-of-life feature: the ability to proactively recommend follow-up questions. You might think this is useless, but that’s only if you’re researching a topic you already know. If you’re learning about something new, Gemini’s recommended follow-up questions serve as a great jumping-off point to drill deeper into complex subjects.
During my brief time with Gemini 3 Pro, I challenged it to answer 9 complex reasoning and coding questions. Here’s how it went.

Question #1: How Gemini 3 optimized my workout regimen
I’ve recently started taking my health more seriously, and as a result, I’m regularly hitting the gym. My younger brother, who started his own fitness journey a few months ago, helped me build a full-body workout regimen consisting of the following exercises:
- Iso-Lateral Row
- Chest Press
- Shoulder Press
- Lat Pulldown
- Bicep Curl
- Triceps Press
- Wrist Curl
- Face Pull
- Leg Extension
- Leg Curl
I knew this regimen was missing a few key exercises; my brother mentioned this, but he left town before showing me the rest. Out of curiosity, I decided to ask Gemini 3:
“Here is a list of the exercises I am currently doing at the gym. Can you tell me if there are any significant muscles or muscle groups I’m not hitting, and if so, what exercises I should add to my regimen?”
Because I had previously told Gemini to act as a skeptical expert who double-checks everything I say, it broke down the muscles I wasn’t hitting (glutes, lower back, abdomen, and calves) and the exercises I should add (leg press, back extensions, ab crunch, and calf raise) in great detail. It then proactively recommended a new workout structure so my sessions wouldn’t drag on for three hours.
The biggest surprise came at the end, when Gemini 3 asked: “Would you like me to help you reorganize the order of these exercises to maximize your energy levels during the workout?” With a simple “Yes, please!” Gemini reshuffled my routine so I would perform the most demanding exercises first.

Gemini then noted that with the new additions, I’d likely be too exhausted to complete the later exercises if I went to the gym four days a week. It suggested splitting my workout into an Upper/Lower schedule so I could keep sessions under an hour on a four-day plan.

As a gym novice, these follow-up questions were incredibly helpful. I doubt I would’ve thought to ask them without Gemini’s prompting. I’m looking forward to seeing how else Gemini can guide my learning!
Question #2: How Gemini 3 fixed this new OnePlus 15 feature
The new OnePlus 15 just launched with OnePlus’ latest OxygenOS 16 release out of the box. Although OxygenOS 16 is available on other OnePlus devices, the OnePlus 15 is the only phone to receive the company’s new Motion Cues feature — basically a clone of iOS 18’s Vehicle Motion Cues. Regardless of the inspiration, I’m glad OnePlus added it; it’s a lifesaver for anyone who suffers from car sickness. Unfortunately, there’s a catch: OnePlus made it a pain to access.
To enable Motion Cues on the OnePlus 15, you have to navigate to Settings > Accessibility & convenience > Motion cues and then tap the switch. You have to do this every time you want to use the feature, which is annoying. I can’t fathom why OnePlus didn’t add a simple Quick Setting tile for this, so I asked Gemini 3 to make one for me.
Admittedly, I had to do a bit of my own research to figure out how to programmatically toggle Motion Cues, but I left the actual app creation to Gemini. I even asked it to use the niche Android 13 API for requesting a Quick Setting tile placement and to integrate the Shizuku library, allowing me to grant the app permission to toggle the setting without connecting my phone to a PC.
Apart from two tiny hiccups — a missing import statement and an unresolved color value — the app worked flawlessly. Gemini 3 created a simple Android app from scratch without much back-and-forth, and that is awesome. It even improved upon Gemini 2.5 Pro’s coding capabilities by not hallucinating methods from the Shizuku library, an annoying issue I encountered with a past project.
Question #3: Testing new experimental Android features with Gemini
Late last month, I discovered that Google is working on a new “EyeDropper” app for Android 17 that helps users pick a color from an image. The “EyeDropper” app provides an API for other apps to invoke, meaning it is intended for use only within specific apps, not system-wide. However, given that the EyeDropper API is open, one of my followers asked if it would be possible to build a Quick Setting tile that invokes the color picker on any screen. With the help of Gemini 3, I quickly created such a tile.
The only issue I encountered was that Gemini’s code used two methods that were deprecated in the version of Android the app targeted. Once I pointed this out, Gemini corrected the code, and the app worked as expected.
Now, I’m sure experienced developers may need to provide more guidance to get Gemini to output the correct code for complex codebases. But its ability to produce mostly working code for simple, one-off projects is impressive and will be useful to anyone looking to solve specific problems.
Questions #4-5: Using Gemini to transform complex documents or raw data into engaging presentations
For my fourth question, I asked Gemini 3 Pro to “create a slideshow that breaks down the proposed settlement terms from the Epic v. Google trial,” referencing the lengthy court document we covered last week. That document consisted of 33 pages of dense text, making it difficult to parse.
While the slideshow was visually appealing, it had flaws. First, it blended outdated information from the web with new details from the court document, oddly mentioning changes set to be implemented “in late 2024.” Second, its image selection was poor. One image pointed in the right direction (a permission dialog on a slide about permission changes), but it should have been the actual “single store install screen” shown in the document. Another image was entirely irrelevant, displaying a generic iOS app UI kit.

Gemini 3 fared better with the second presentation I tasked it with making, perhaps because I grounded it with extra context. Specifically, I asked it to visualize our data regarding the Pixel 10’s issues with older Qi chargers. Everything was accurate, and the slides were helpful. Not to knock the wonderful work of my colleagues, but this presentation demonstrates the power of visuals in data analysis — as well as how easy it is to generate those visuals using Gemini.

Despite the factual and visual hiccups in the first test, I would still use Gemini 3 to craft future presentations as it serves as an excellent starting point. If it continues to generate drafts that require only minor tweaks, I can see myself using it extensively.
Questions #6-7: Getting Gemini to analyze complex charts
Switching gears from creation to analysis, I tested Gemini 3’s ability to interpret complex charts. Specifically, I fed it two charts from my recent article on “risk-based security updates” and why Google wants to switch Android to them. I also asked it to outline the pros and cons of Google’s “longevity GRF” program, using only two charts I created as context.
In both instances, Gemini accurately interpreted the data, effectively generating a detailed summary of my original reporting.
If you need to parse a complex chart from a financial report or technical document, Gemini 3 could be pretty helpful. While blindly trusting an AI chatbot remains risky, Gemini 3 Pro’s multimodal capabilities — combined with its ability to reason and retrieve web information — help improve accuracy and mitigate hallucinations.
Questions #8-9: r/TheyDidTheMath? More like Gemini did the math.
Reddit’s r/TheyDidTheMath is a fun community where math nerds answer hypothetical (and often silly) questions. If I came across a hypothetical that wasn’t already answered by those nerds, could I rely on Gemini to crunch the numbers instead? To find out, I asked Gemini to answer two questions already solved by the community:
- The Watermelon Explosion: If a watermelon is behind a thick steel wall with a 1mm gap during an explosion, will it get sliced in half? Gemini’s Answer: No. Due to diffraction, the force would expand after exiting the gap, causing the watermelon to explode rather than slice. This aligns with a top Reddit comment, though other users argued that if the melon were close enough, the shockwave would act as an “air knife” (albeit a messy one).
- The Bitcoin Lottery: How many times would one have to win the lottery to have the same odds as guessing the password to Satoshi Nakamoto’s Bitcoin wallet? Gemini’s Answer: Its calculation matched the top Reddit comment perfectly regarding the number of unique seed phrases required to access the wallet.
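To give a sense of the scale involved in that second question, here is a rough sketch of the calculation. The specific figures are my own assumptions, not from the article or the Reddit thread: Powerball’s advertised 1-in-292,201,338 jackpot odds, and a BIP39 seed phrase as the “password,” where a 12-word mnemonic encodes 128 bits of entropy.

```python
import math

# Assumed figures (not from the article): Powerball jackpot odds and the
# entropy of a BIP39 seed phrase (12 words = 128 bits, 24 words = 256 bits).
LOTTERY_ODDS = 292_201_338   # 1-in-N chance of hitting the jackpot
SEED_ENTROPY_BITS = 128      # 12-word mnemonic

def wins_needed(odds: int, entropy_bits: int) -> int:
    """Smallest n such that winning the lottery n times in a row is at
    least as unlikely as guessing the seed phrase in one try:
    (1/odds)**n <= 2**-entropy_bits  =>  n >= entropy_bits * ln(2) / ln(odds)
    """
    return math.ceil(entropy_bits * math.log(2) / math.log(odds))

print(wins_needed(LOTTERY_ODDS, SEED_ENTROPY_BITS))  # -> 5
print(wins_needed(LOTTERY_ODDS, 256))                # 24-word phrase -> 10
```

Under these assumptions, roughly five consecutive jackpot wins carry the same odds as a single correct guess at a 12-word seed phrase, which is why “brute-forcing Satoshi’s wallet” stays firmly in hypothetical territory.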
So, would I turn to Gemini to solve a hypothetical math or physics problem? Probably! But I would be hesitant to present that answer as fact without verification. Then again, that hasn’t stopped people on social media from being confidently wrong, so perhaps AI will just help them get there faster.
I’ve only spent a brief time with Gemini 3 Pro, but I’m already impressed. Google wasn’t exaggerating about the improved reasoning and coding capabilities. While we expect newer models to outperform their predecessors, recent AI advancements have felt increasingly incremental. Gemini 3 Pro may not be a generational leap over 2.5 Pro, but that is largely because the previous model was already exceptional. I look forward to integrating Gemini 3 into more of my life, as its predecessor has already significantly enhanced my workflow.