I am a Ph.D. Candidate in Computer Science at Kansas State University, under the supervision of Prof. Arslan Munir, working as a Graduate Research Assistant at the Intelligent Systems, Computer Architecture, Analytics and Security Laboratory (ISCAAS Lab). Earlier, I completed my M.S. in Biomedical Engineering at Istanbul Medipol University, Turkey, and my B.S. in Mechatronics and Control Engineering at UET Lahore.
Research Interests

My primary research focuses on reinforcement learning and generative modeling, building agents that learn reliable, long-horizon behavior from imperfect data through latent skill learning, planning under partial observability, and diffusion-based policy refinement. I am also interested in connecting these ideas with vision-language models and LLM-guided planning for embodied agents, alongside applications in robotics, health, and cyber-physical systems, reflecting a broader commitment to principled AI with real-world impact.
The center remains fixed on Generative AI + Reinforcement Learning, while the surrounding domains show where these core methods are applied and extended. The goal is not to present unrelated topics, but to show a connected research program spanning robotics, cyber-physical systems, LLM and VLM agents, health AI, autonomous systems, and resilient infrastructure through shared ideas in planning, representation learning, robustness, and decision-making.
Latent-skill offline RL framework for contact-rich robotic manipulation. Improves long-horizon planning while preserving behavior support and low-level execution quality across D4RL, Adroit, and RoboSuite benchmarks.
Masked latent-skill inference for robust long-horizon planning under missing or degraded context. Focuses on partial observability in offline reinforcement learning and context-robust skill sequencing.
Developed the SA3C algorithm with an attention mechanism to improve sample efficiency and decision quality for low-thrust spacecraft trajectory optimization in geocentric and cislunar missions.
Developed a novel Cascaded Deep Reinforcement Learning (CDRL) approach to low-thrust spacecraft trajectory planning, enabling significantly more time-efficient orbit transfers to GEO and NRHO in complex multi-body environments.
Introduced a resilient neural coordination framework for grid-forming inverter networks that maintains stability and coordination under cyberattacks in smart-grid environments.
Developed a machine-learning-assisted method for optimizing low-thrust orbit-raising trajectories that integrates a sequential algorithm with a neural-network-based high-level planner, and benchmarked it against deep reinforcement learning approaches for geostationary and halo-orbit missions.
Developed a cascaded DRL model for optimizing long-duration, low-thrust spacecraft transfers from GTO to GEO. Guided by a gradient-aided reward function, the method significantly reduces transfer time and improves spacecraft autonomy in complex multi-revolution transfers.
Adaptive multi-teacher knowledge distillation framework aimed at improving adversarial robustness beyond standard single-teacher or conventional training approaches.
Proposed a simple and efficient domain generalization approach that augments source domains by exploring dominant modes of variation in the feature space, improving generalization to unseen domains across standard DG benchmarks.
Developed and compared machine learning models for laboratory earthquake prediction using LANL data, where CNN-LSTM models improved time-to-failure prediction over approaches based on hand-crafted features.
Demonstrated trajectory-based neuroprosthetic control in rodents using primary motor cortex activity, providing a cost-effective platform for studying brain-machine interfaces and neural control.