Coming 2026

Point. See. Book.

GeraLens is a camera-activated service discovery API that identifies what you are looking at and surfaces the right Gera action. Broken sink, skin rash, restaurant menu, medication bottle — point a camera, get the service.

Why this matters in 2030

Meta Ray-Bans, Apple Vision Pro, and Samsung-Google smart glasses are on the path to mainstream $200-500 price points. People will not type “plumber near me” — they will look at their broken sink and ask. They will not type “telemedicine for rash” — they will look at their arm.

Whoever owns the intent-to-action layer from camera imagery to real-world services wins the ambient computing era. The hard part is not computer vision. The hard part is having something to do after you recognise the pipe, the rash, the bottle, the crop. That is Gera’s 22 verticals.

GeraLens is the bridge. Vision model plus intent classifier plus Gera’s supply side. Hardware vendors get an ambient services layer they cannot build themselves. Users get the thing they actually wanted.

How it works

1. Point your camera
Phone, AR glasses, dashcam, smart home hub, or any camera-enabled device. GeraLens ships SDKs plus a neutral REST/WebSocket API.

2. We identify the intent
A vision model plus an intent classifier returns the matching Gera action, or honestly says "not sure" when confidence is low.

3. The right service takes over
The intent routes to Heliodoc, Wrkdo, GeraEats, GeraFarm, or whichever vertical can actually fulfil it. Payment, booking, and follow-up use the existing Gera stack. A rough sketch of the whole flow follows below.
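For an integrator, the end-to-end flow could look roughly like the TypeScript sketch below. It is a minimal illustration under assumptions: the endpoint path, authentication scheme, and response fields are hypothetical, not a published API.

```ts
// Hypothetical GeraLens flow. Endpoint, auth scheme, field names, and statuses
// are illustrative assumptions, not a published API.

type GeraLensIntent = {
  objectClass: string;   // e.g. "broken_pipe", "skin_condition"
  vertical: string;      // e.g. "Wrkdo", "Heliodoc"
  action: string;        // e.g. "plumber_quote", "triage_consult"
  confidence: number;    // 0..1
};

type GeraLensResponse =
  | { status: "match"; intent: GeraLensIntent }
  | { status: "not_sure" };  // returned instead of a low-confidence guess

// Steps 1-2: send a frame, get back an intent (or an honest "not_sure").
async function identify(frame: Blob, apiKey: string): Promise<GeraLensResponse> {
  const form = new FormData();
  form.append("image", frame, "frame.jpg");

  const res = await fetch("https://api.example.com/geralens/v1/identify", {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}` },
    body: form,
  });
  if (!res.ok) throw new Error(`GeraLens request failed: ${res.status}`);
  return (await res.json()) as GeraLensResponse;
}

// Step 3: hand the intent to whichever vertical can fulfil it; payment,
// booking, and follow-up would run on the existing Gera stack.
async function pointSeeBook(frame: Blob, apiKey: string): Promise<void> {
  const result = await identify(frame, apiKey);
  if (result.status === "not_sure") {
    console.log("Low confidence: ask the user to reframe or confirm.");
    return;
  }
  console.log(`Routing to ${result.intent.vertical}: ${result.intent.action}`);
}
```

Treating "not_sure" as a first-class status, rather than a low-confidence match, is what lets step 2 fail honestly instead of hallucinating a service.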

Launch intents

Ten object classes where the Gera ecosystem can actually fulfil what the camera sees.

You look at → GeraLens does
Broken pipe → Wrkdo plumber quote and booking
Skin condition → Heliodoc triage and video consult
Restaurant menu → GeraEats reorder, translation, allergen warnings
Foreign street sign → Translation plus GeraCompliance advisory
House exterior → GeraRent valuation estimate
Product barcode → Agorivo price compare and authenticity check
Medication bottle → Heliodoc drug interaction checker plus refill
Agricultural crop → GeraFarm yield estimate and market price
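Inside an integration, that catalogue might surface as a plain mapping from object class to vertical action. The identifiers below simply mirror the table rows and are illustrative assumptions, not a published schema.

```ts
// Illustrative launch-intent catalogue mirroring the table above.
// Class and vertical identifiers are assumptions, not a published schema.
const LAUNCH_INTENTS: Record<string, { vertical: string; action: string }> = {
  broken_pipe:         { vertical: "Wrkdo",          action: "plumber quote and booking" },
  skin_condition:      { vertical: "Heliodoc",       action: "triage and video consult" },
  restaurant_menu:     { vertical: "GeraEats",       action: "reorder, translation, allergen warnings" },
  foreign_street_sign: { vertical: "GeraCompliance", action: "translation and advisory" },
  house_exterior:      { vertical: "GeraRent",       action: "valuation estimate" },
  product_barcode:     { vertical: "Agorivo",        action: "price compare and authenticity check" },
  medication_bottle:   { vertical: "Heliodoc",       action: "drug interaction check and refill" },
  agricultural_crop:   { vertical: "GeraFarm",       action: "yield estimate and market price" },
};
```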

Frequently asked questions

What is GeraLens in one sentence?
GeraLens is a camera-activated service discovery API that identifies what you are looking at and surfaces the Gera service that can fulfil it.
How is this different from Google Lens?
Google Lens can identify a plumbing leak. It cannot dispatch a plumber in Tbilisi. GeraLens identifies the same object and immediately connects to Wrkdo to book a professional who will actually show up. Vertical depth, not breadth of recognition, is the moat.
Which devices does GeraLens work on?
Any device with a camera and internet: phones, AR glasses (Meta Ray-Bans, Apple Vision Pro, Samsung-Google glasses), smart home hubs, dashcams, and industrial cameras. GeraLens ships an SDK for each platform plus a neutral REST and WebSocket API.
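For always-on devices such as glasses or dashcams, the WebSocket path might look roughly like this sketch. The URL, query-string auth, frame cadence, and message shape are assumptions for illustration, not a documented protocol.

```ts
// Hypothetical streaming sketch for continuous-capture devices.
// URL, auth, and message shapes are illustrative assumptions.
function streamFrames(getFrame: () => ArrayBuffer | null, apiKey: string): WebSocket {
  const ws = new WebSocket(`wss://api.example.com/geralens/v1/stream?key=${apiKey}`);

  ws.onopen = () => {
    // One frame per second here; a real client would throttle on motion or gaze.
    const timer = setInterval(() => {
      if (ws.readyState !== WebSocket.OPEN) {
        clearInterval(timer);
        return;
      }
      const frame = getFrame();
      if (frame) ws.send(frame);
    }, 1000);
  };

  ws.onmessage = (event) => {
    const msg = JSON.parse(event.data as string);
    if (msg.status === "match") {
      // Surface the matched Gera action in the device UI.
      console.log(`Detected ${msg.intent.objectClass} -> ${msg.intent.action}`);
    }
  };

  return ws;
}
```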
Can AR hardware vendors integrate?
Yes. That is a primary channel. Hardware vendors pay an integration fee plus a per-user royalty in exchange for a full ambient-services layer without building 22 supply chains themselves.
What happens to the images I send?
Images are processed to extract an intent and are not retained beyond the immediate transaction unless the user explicitly opts in to history. No training on user images without explicit consent. Full details are on the privacy page.
How accurate is the recognition?
GeraLens is launching with 10 high-confidence object classes where vertical Gera supply can actually fulfil the intent. We would rather say "I am not sure" than hallucinate a plumber for a flower. Accuracy is tracked per class and published.
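In client terms, that policy boils down to per-class confidence thresholds. The numbers below are illustrative assumptions, not published GeraLens values.

```ts
// "Say not-sure rather than guess" policy. Threshold values are assumptions.
const DEFAULT_THRESHOLD = 0.85;
const PER_CLASS_THRESHOLD: Record<string, number> = {
  skin_condition: 0.95,     // medical intents demand higher confidence
  medication_bottle: 0.95,
};

function decide(objectClass: string, confidence: number): "match" | "not_sure" {
  const threshold = PER_CLASS_THRESHOLD[objectClass] ?? DEFAULT_THRESHOLD;
  return confidence >= threshold ? "match" : "not_sure";
}
```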
When can I start using it?
Join the waitlist. Mobile SDK preview in 2026. Full AR hardware partner program targeted for 2027 as mainstream AR glasses reach scale.