
Mesha Farrukh

FAST · 2021 · i17 - 0048
Email
Phone
LinkedIn
GitHub

Academic

Program
CGPA
Year
2021
Education
Address
DOB

Career

Current role
Target role
Skills
Flutter, Dart, TFLite, NumPy, TensorFlow, Python

Verbatim text

The exact text the LLM saw on the page (or the booklet text from the old import). This is what powers semantic search.
Uvea - Image to Speech for the Visually Impaired
Uvea is an image-to-speech accessibility mobile application for the blind and visually impaired,
utilizing object detection and classification via deep learning techniques.
The visually impaired face many challenges in their daily lives. Many of these are simple day-to-day
tasks such as counting money, moving without collision, and knowing what macro objects are in
their path.
In 2010, the WHO estimated that about 285 million people were visually impaired, of whom 39 million
were blind. It is also predicted that by 2030 these numbers will rise to 330 million and 55 million,
respectively. With the number of visually impaired people ever increasing, the need for technology
that assists this very large market will also continue to grow.
Features include:
Money Counter - allows users to count their money using the phone’s camera.
Collision Prevention - object detection and classification in pre-set zones to prevent collisions. This will be
limited to indoor environments. The included objects are the following:
● Furniture: sofa, table, bed, chairs
● Walls
● Doors
Staircase Detection - detection and classification of staircases (upstairs, downstairs)
Feedback via voice - to listen to instructions, detected objects, and the current perspective
Input via haptic touch - to request instructions and switch perspective
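The pre-set-zone collision check described above could be sketched as follows. This is a minimal illustration, not the project's actual implementation: it assumes an upstream TFLite detector has already produced detections as (label, xmin, ymin, xmax, ymax) tuples in normalized [0, 1] image coordinates, and the zone boundaries, obstacle labels, and closeness threshold are all illustrative assumptions.

```python
# Hypothetical sketch of assigning detections to pre-set zones and
# deciding which obstacles warrant a spoken collision warning.
# Box format assumed: (label, xmin, ymin, xmax, ymax), normalized to [0, 1].

OBSTACLE_LABELS = {"sofa", "table", "bed", "chair", "wall", "door"}

def zone_of(box):
    """Map a detection to the left/center/right zone by its horizontal midpoint."""
    _, xmin, _, xmax, _ = box
    mid = (xmin + xmax) / 2
    if mid < 1 / 3:
        return "left"
    if mid < 2 / 3:
        return "center"
    return "right"

def collision_warnings(detections, min_height=0.4):
    """Return (zone, label) pairs for obstacles that fill enough of the
    frame vertically to be considered close (threshold is an assumption)."""
    warnings = []
    for det in detections:
        label, _, ymin, _, ymax = det
        if label in OBSTACLE_LABELS and (ymax - ymin) >= min_height:
            warnings.append((zone_of(det), label))
    return warnings
```

In an app like Uvea, each returned (zone, label) pair could then be rendered to the user through the voice-feedback feature (e.g. "chair, center").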

Technology Used:
Flutter, Dart, TFLite, NumPy, TensorFlow, Python
Supervisor Name:
Dr. Omer Ishaq
Dr. Kashif Saghar
Group Members:
Mesha Farrukh (i17 - 0048)

AI enrichment

Mesha Farrukh is a student who contributed to the development of Uvea, a mobile application designed to assist visually impaired individuals through image-to-speech functionality. The project utilized deep learning for object detection and classification to address challenges such as money counting and collision prevention.
Skills (AI)
["Flutter", "Dart", "TensorFlow", "TFLite", "Python", "NumPy", "Object Detection", "Deep Learning", "Mobile Application Development"]
Status: ai_done
Provenance
Source file: Graduate Directory FAST School of Computing 2021 (1st Final) (1).pdf
From job #24 page 242
Created: 1778144159