## Problem Statement
Choosing a restaurant shouldn't require reading 50 reviews. I wanted a tool that answers three questions instantly:
- What's good nearby?
- What do people actually say about the food?
- Can I park there?
## The Original Build (3 Days)
The first version stitched together two Google Cloud APIs plus some custom logic:
- Google Places API — find restaurants within a radius, fetch ratings and reviews
- Natural Language API — sentiment analysis on review text, entity extraction for food items
- Keyword matching — scan reviews for parking-related terms (valet, parking lot, street parking)
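The parking check was the simplest piece: plain keyword matching over review text. A minimal sketch of that idea — the term list and helper name here are illustrative, not the original code:

```python
import re

# Illustrative term list; the original's exact keywords aren't shown.
PARKING_TERMS = ["valet", "parking lot", "street parking", "parking garage"]

def parking_mentions(reviews):
    """Return (matched term, review text) pairs for parking-related reviews."""
    pattern = re.compile(
        "|".join(re.escape(t) for t in PARKING_TERMS), re.IGNORECASE
    )
    hits = []
    for text in reviews:
        match = pattern.search(text)
        if match:
            hits.append((match.group(0).lower(), text))
    return hits
```

Crude, but for a binary "can I park there?" signal it gets surprisingly far before you need anything smarter.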
Three full days of work: API authentication, pagination handling, rate limiting, response parsing, error handling, and result ranking.
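Much of that plumbing time went into pagination. The Places Nearby Search API returns results in pages linked by a `next_page_token`, and a fresh token only becomes valid after a short delay. A sketch of that loop, with the HTTP call abstracted behind a `fetch_page` callable (a stand-in for a real client, not an actual library method):

```python
import time

def iter_places(fetch_page, max_pages=3, page_delay=2.0):
    """Walk Places-style paginated results.

    `fetch_page(page_token)` is assumed to return a dict like
    {"results": [...], "next_page_token": "..."} — the shape the
    Places Nearby Search response uses. The real API needs a short
    pause before a next_page_token becomes valid, hence `page_delay`.
    """
    token = None
    for _ in range(max_pages):
        page = fetch_page(token)
        yield from page.get("results", [])
        token = page.get("next_page_token")
        if not token:
            break
        time.sleep(page_delay)
```

None of this is hard, but every piece of it is code you have to write, test, and debug before you see a single result.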
## The Rebuild (3 Minutes)
Months later, I revisited the same idea using Google AI Studio. With a few structured prompts, I had a working version that produced comparable results — no backend setup, no API juggling.
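To give a feel for what "a few structured prompts" means here — the wording below is illustrative, not the actual prompts — the whole pipeline collapses into assembling review text and asking the three questions directly:

```python
def build_prompt(restaurant, reviews):
    """Assemble a structured prompt asking the three questions directly.

    Illustrative only; the actual AI Studio prompts aren't reproduced
    in this write-up.
    """
    review_block = "\n".join(f"- {r}" for r in reviews)
    return (
        f"Here are recent reviews for {restaurant}:\n"
        f"{review_block}\n\n"
        "Answer three questions, each in one sentence:\n"
        "1. What dishes do reviewers praise most?\n"
        "2. What is the overall sentiment about the food?\n"
        "3. Is parking available (valet, lot, or street)?"
    )
```

Everything the original backend did with three integrations — entity extraction, sentiment, keyword matching — becomes one model call on this prompt.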
## What Changed
| Aspect | Original (API-based) | Rebuilt (AI-prompted) |
|---|---|---|
| Build time | 3 days | 3 minutes |
| Backend setup | Flask + 3 API keys + auth | None |
| Data handling | Pagination, rate limits, error handling | Single prompt |
| NLP accuracy | High (dedicated NLP API) | Good (general-purpose LLM) |
| Cost at scale | Predictable (per-API pricing) | Token costs add up fast |
| Production ready | Yes (with caching) | No (prompt-based, no caching) |
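The "token costs add up fast" row is easy to quantify with back-of-envelope numbers. Every figure below is an assumption for illustration — roughly 4 characters per token, an invented $0.50 per million input tokens, 20 reviews of ~500 characters per restaurant:

```python
def estimated_cost(n_restaurants, reviews_per=20, chars_per_review=500,
                   chars_per_token=4, usd_per_mtok=0.50):
    """Rough input-token cost of sending full review text through an LLM.

    All defaults are illustrative assumptions, not quoted prices.
    """
    tokens = n_restaurants * reviews_per * chars_per_review / chars_per_token
    return tokens / 1_000_000 * usd_per_mtok
```

At 10,000 lookups that's about 25M input tokens — small per request, but unlike flat per-request API pricing, it scales with how much review text you ship to the model.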
## Tech Stack
- Python / Flask — original backend API
- Google Places API — restaurant discovery and review fetching
- Google Natural Language API — sentiment analysis and entity extraction
- Google AI Studio — rapid rebuild for comparison
## Lessons Learned
- AI tools are exceptional for prototyping. What took 3 days of API integration now takes minutes. For feasibility checks and POCs, the speed difference is transformative.
- Production still needs engineering. The AI-prompted version works for demos but lacks caching, rate limiting, error handling, and cost controls. The gap between "it works" and "it works at scale" hasn't changed.
- Token costs compound fast. API-based pricing is predictable (per request). LLM token pricing scales with input size — sending full review text through a model is expensive at volume.
- The shift is in mindset, not just tooling. It's no longer about how much code you write, but how clearly you can express what you want to build. The skill is moving from "implement this algorithm" to "describe this outcome."
- NLP APIs still win on precision. Google's Natural Language API returns structured sentiment scores and entity types. An LLM gives you natural language analysis that's harder to parse programmatically. For production pipelines, structured output matters.
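That last point is concrete. The Natural Language API hands back structured JSON you can index into directly. The field names below mirror its sentiment and entity responses; the values are made up for illustration:

```python
# Shape mirrors the Natural Language API's analyzeSentiment /
# analyzeEntities responses (field names real, values invented).
nl_response = {
    "documentSentiment": {"score": 0.8, "magnitude": 1.9},
    "entities": [
        {"name": "carbonara", "type": "CONSUMER_GOOD", "salience": 0.6},
        {"name": "service", "type": "OTHER", "salience": 0.1},
    ],
}

def top_dishes(response, min_salience=0.3):
    """Pull high-salience entities straight out of structured output."""
    return [e["name"] for e in response["entities"]
            if e["salience"] >= min_salience]
```

An LLM answering in prose ("Reviewers rave about the carbonara...") needs brittle text parsing — or a schema-constrained prompt — before you can do the same field access.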