
Syed Muhammad Taha Imam

NUST · 2026
Email
syedmuhammadtaha.dev@gmail.com
Phone
923181272233
LinkedIn
https://www.linkedin.com/in/syed-muhammad-taha-imam/
GitHub

Academic

Program
BSCS
CGPA
3.44
Year
2026
Education
School of Electrical Engineering and Computer Science
Address
Karachi, Pakistan
DOB

Career

Current role
Target role
Skills
Deep Learning, Transformer architectures, Model refinement, Distributed training systems, LLMs, Vision-Language Models, RAG systems, MLOps, Nest.js, Supabase, Next.js, Redis, BullMQ, CI pipelines, Unit testing, Streamlit, Chain-of-Thought prompting, FastAPI, Twitter scraping, Sentiment analysis, Docker, Apache Airflow, DAGs, CI/CD, GitHub Actions, Prompt engineering, Quantization, HNSW, Parallel processing, Batch processing, Literature review, PEFT, Tucker Decomposition, Self-Attention, Large Document Models

Verbatim text

The exact text the LLM saw on the page (or the booklet text from the old import). This is what powers semantic search.
Syed Muhammad Taha Imam
Cell: 923181272233 | Email: syedmuhammadtaha.dev@gmail.com
LinkedIn: https://www.linkedin.com/in/syed-muhammad-taha-imam/
Address: House #1683, Street 26, Precinct 8, Bahria Town Karachi, Karachi, Pakistan
PROFESSIONAL PROFILE
Deep Learning–focused Computer Science undergraduate with hands-on research and industry experience in Transformer architectures, model refinement, and distributed training systems. Proven ability to improve model efficiency through architectural optimization, parameter reduction, and scalable inference pipelines, with experience spanning LLMs, Vision-Language Models, RAG systems, and production-grade MLOps. Actively engaged in research-driven development with a strong inclination toward core DL over black-box model usage.
EDUCATION
BSCS
School of Electrical Engineering and Computer Science, Islamabad, 3.44 (2026)
INTERNSHIP EXPERIENCE
Epistemy UK
01-Sep-2025 - 01-Dec-2025
– Led the end-to-end development of an AI tutoring platform, architecting a Nest.js backend (20+ endpoints, Supabase) and a Next.js frontend.
– Architected an event-driven task queue (Redis/BullMQ) to orchestrate multi-agent workflows, ensuring fault tolerance across distributed analysis pipelines.
– Enforced strict SWE standards with CI pipelines and pre-commit hooks, achieving and maintaining 90%+ unit test coverage.
RapidsAI
01-Sep-2024 - 01-Dec-2024
• Integrated a RAG-based chatbot into a website using Streamlit, increasing user engagement by 35%.
• Optimized chatbot responses through Chain-of-Thought prompting, reducing incorrect responses by 50%.
• Cut OpenAI API costs by 50% with a multi-model query routing system.
• Designed a FastAPI endpoint for chatbot responses, enabling user session management and contextual interactions.
• Built a Twitter scraper that processed 500+ tweets daily, enabling large-scale sentiment analysis.
CogniMind AI
01-Feb-2025 - 01-May-2025
• Deployed a dockerized Apache Airflow instance on a virtual machine, improving workflow automation efficiency by 40%.
• Designed 5+ scalable DAGs in Airflow to orchestrate data pipelines, reducing manual intervention by 60%.
• Developed CI/CD pipelines with GitHub Actions and Docker, cutting deployment time by 30%.
• Improved VLM document extraction accuracy by 20% using prompt engineering techniques.
• Optimized inference pipelines by 10% using parallel and batch processing techniques.
• Improved retrieval speed and accuracy by 10% using quantization and HNSW parameter tuning.
Bradbury Lab
01-Apr-2025 - 20-Jan-2026
– Conducted a comprehensive literature review on Transformer topology and PEFT, identifying specific inefficiencies in existing weight-sharing methods.
– Proposed a novel layer-merging strategy based on Tucker Decomposition, aiming to reduce parameter count without retraining.
– Analyzed the mathematical properties of Self-Attention blocks to demonstrate the feasibility of aligning Query (Q) and Key (K) projections for future efficient-by-design architectures.
FINAL YEAR PROJECT
LiteDoc: Distilling Large Document Models into Efficient Task-Specific Encoders

AI enrichment

Syed Muhammad Taha Imam is a Computer Science undergraduate with a 3.44 CGPA, specializing in Deep Learning, LLMs, and MLOps. He has gained practical experience through internships involving AI platform development, RAG systems, and distributed training optimization.
Skills (AI)
["Deep Learning", "Transformer Architectures", "LLMs", "RAG Systems", "MLOps", "Python", "FastAPI", "Nest.js", "Next.js", "Docker", "Apache Airflow", "Redis", "CI/CD", "Prompt Engineering", "Model Quantization"]
Status: ai_done
Provenance
Source file: SEECS - Computer Science-2026.pdf
From job #258 page 113
Created: 1778167261