Talking to ally is like calling a helpful friend who always picks up, ready to support you wherever you are.
Just talk to ally like you would a friend.
Your ally understands what you mean and provides helpful responses.
When you ask “What kind of jacket do I need today?”, ally gets it, checks the local weather, and recommends the best fit for the day.
You create your own ally, with a voice and personality that suits you.
Whether you want a smart English butler for work or a funny sidekick for fun, you can choose whatever works best for you at any given moment.
Your ally keeps learning about you, your likes and your dislikes.
Point at a menu and say “Make me a recommendation”, and ally will take your dietary preferences into account and suggest options you’d love.
Available where you want it, how you want it
ally is designed to be accessible and easy for everyone to use. It works smoothly with assistive technologies and built-in accessibility features. You can talk to ally on your smartphone, web browser, or smartglasses — and enjoy the same seamless experience everywhere.
1. Understanding User Intent
The first step in ally’s response flow is understanding the user’s intent, using a custom reasoning model built by Envision.
1. Intent Recognition: ally examines the question’s structure and context to identify the user’s intent.
2. Query Understanding: The model determines whether the user seeks information, a visual analysis, or an action.
3. Examples:
   - “Do I need my umbrella today?” → Recognized as a weather-related query.
   - “What am I holding in my hand?” → Interpreted as a visual object recognition request.
This reasoning model ensures ally correctly interprets the user's needs before responding.
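To make this step concrete, here is a minimal sketch in Python of how a query might be routed to an intent category. The intent labels, keyword checks, and the `classify_intent` function are illustrative assumptions only; Envision’s actual classifier is a custom reasoning model, not keyword matching.

```python
# A minimal sketch of intent routing, assuming hypothetical intent labels.
# Envision's production system uses a custom reasoning model, not keyword rules.
from dataclasses import dataclass
from enum import Enum, auto


class Intent(Enum):
    WEATHER_QUERY = auto()       # "Do I need my umbrella today?"
    VISUAL_RECOGNITION = auto()  # "What am I holding in my hand?"
    GENERAL_KNOWLEDGE = auto()   # "What is the capital of the Netherlands?"


@dataclass
class ParsedQuery:
    text: str
    intent: Intent


def classify_intent(text: str) -> ParsedQuery:
    """Very rough stand-in for the reasoning model's intent-recognition step."""
    lowered = text.lower()
    if any(word in lowered for word in ("umbrella", "jacket", "weather", "rain")):
        intent = Intent.WEATHER_QUERY
    elif any(phrase in lowered for phrase in ("holding", "what is this", "what am i")):
        intent = Intent.VISUAL_RECOGNITION
    else:
        intent = Intent.GENERAL_KNOWLEDGE
    return ParsedQuery(text=text, intent=intent)


if __name__ == "__main__":
    print(classify_intent("Do I need my umbrella today?").intent)   # Intent.WEATHER_QUERY
    print(classify_intent("What am I holding in my hand?").intent)  # Intent.VISUAL_RECOGNITION
```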
2. Available Tools in ally’s Arsenal
1. Language Model (LLM): Handles general knowledge, calculations, and reasoning-based queries.
   Example: “What is the capital of the Netherlands?”
2. Camera: Captures images for visual identification.
   Example: “What is this object?”
3. Visual Language Model (VLM): Analyzes images to describe or extract specific details.
4. OCR (Optical Character Recognition): Reads text from images and documents. Envision’s OCR:
   - Detects structured text layouts (e.g., headings, formats)
   - Enhances readability and accessibility
5. Weather API: Provides real-time weather information based on location.
6. Calendar Integration: Retrieves events and reminders upon request.
7. Web Search: Extends ally’s responses by sourcing live web data when needed.
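As an illustration of how a recognized intent could be dispatched to one of these tools, the sketch below uses a simple Python registry. The `register_tool` decorator, the tool names, and the placeholder return values are assumptions made for illustration; they are not Envision’s actual integrations.

```python
# A hedged sketch of a tool registry, assuming a hypothetical tool interface.
from typing import Callable, Dict

ToolFn = Callable[[str], str]

TOOL_REGISTRY: Dict[str, ToolFn] = {}


def register_tool(name: str) -> Callable[[ToolFn], ToolFn]:
    """Decorator that adds a tool function to the registry under a name."""
    def wrapper(fn: ToolFn) -> ToolFn:
        TOOL_REGISTRY[name] = fn
        return fn
    return wrapper


@register_tool("llm")
def answer_with_llm(query: str) -> str:
    # Placeholder: a real implementation would call the language model.
    return f"[LLM answer for: {query}]"


@register_tool("weather")
def answer_with_weather(query: str) -> str:
    # Placeholder: a real implementation would call a weather API with the
    # user's location.
    return f"[Weather lookup for: {query}]"


def dispatch(tool_name: str, query: str) -> str:
    """Route a query to the tool chosen during intent recognition."""
    tool = TOOL_REGISTRY.get(tool_name, answer_with_llm)  # fall back to the LLM
    return tool(query)


if __name__ == "__main__":
    print(dispatch("weather", "Do I need my umbrella today?"))
```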
3. Contextual Personalization
1. User Preferences: ally tailors replies using personal details from the “About You” section, such as dietary or professional preferences.
2. Location Awareness: It uses location data to make weather, news, and local suggestions more relevant.
3. Language and Cultural Adjustments: Responses are aligned with the user’s preferred language and cultural context.
4. Response Style Customization: Users can set communication preferences in the “About ally” section, choosing tones like professional, concise, cheerful, or humorous.
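The sketch below shows one way this kind of context could be folded into a request before it reaches the model. The `UserContext` fields and the `build_prompt` helper are hypothetical; only the “About You” and “About ally” section names come from the description above.

```python
# A minimal sketch of attaching personalization context to a query.
# Field names and the prompt format are illustrative assumptions, not Envision's API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class UserContext:
    about_you: List[str] = field(default_factory=list)  # e.g. "vegetarian" (from "About You")
    location: str = ""                                   # e.g. "The Hague, NL"
    language: str = "en"
    response_style: str = "concise"                      # tone chosen in "About ally"


def build_prompt(query: str, ctx: UserContext) -> str:
    """Fold the user's preferences, location, language, and tone into the prompt."""
    preference_lines = "; ".join(ctx.about_you) or "none provided"
    return (
        f"User preferences: {preference_lines}\n"
        f"Location: {ctx.location or 'unknown'}\n"
        f"Reply in language '{ctx.language}' with a {ctx.response_style} tone.\n"
        f"Question: {query}"
    )


if __name__ == "__main__":
    ctx = UserContext(about_you=["vegetarian"], location="The Hague, NL",
                      response_style="cheerful")
    print(build_prompt("Make me a recommendation from this menu", ctx))
```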