We spoke with Oleh Chaplia, ELEKS' Senior Software Developer, about the latest breakthroughs in AI-driven robotics, highlighting the transformative potential of these technologies and the exciting opportunities they present.
Artificial intelligence enables software and hardware to perceive, reason, and act intelligently. Continuous research into making robots more adaptive, intelligent, and flexible drives the industry. AI is the foundation of modern robotics, mainly through machine learning and deep learning, which allow robots to process and interpret data and adapt to new environments. Below, we review several notable examples.
Figure AI launched Helix, a Vision-Language-Action (VLA) model enabling robots to interpret natural language commands and manipulate unfamiliar objects without prior programming. The company plans to deploy 100,000 humanoid robots within four years.
Figure's robots have been tested in automotive manufacturing in collaboration with BMW, successfully performing tasks like inserting sheet metal parts into assembly fixtures.
Clone Robotics has unveiled the Protoclone, a humanoid robot designed to closely mimic human anatomy. It features a complete skeletal system, more than 1,000 artificial muscles (Myofibers), and a hydraulic system replicating human muscle function. The Protoclone can process visual input and learn from human actions with the help of depth cameras, inertial sensors, and pressure sensors.
In a recent demonstration, it was shown suspended and flexing its limbs, showcasing its lifelike motion. A limited "Alpha" edition is set for pre-orders in late 2025. However, the robot remains in early development, requiring suspension for stability and lacking independent balance. The Protoclone marks a significant step in biomimetic robotics, with potential applications in industries needing human-like dexterity.
Meta FAIR's sensing and tactile research enhances AI's ability to interact with the physical world through advanced touch perception. They developed Meta Sparsh, a general-purpose tactile encoder, and Meta Digit 360, a high-precision fingertip sensor with multimodal sensing capabilities. These technologies push the boundaries of robotic dexterity.
Robots can now detect minute forces, analyse surface textures, and interpret physical interactions in real time. Meta Digit Plexus provides a standardised platform for integrating tactile sensors into robotic hands.
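To make the idea concrete, here is a minimal illustrative sketch (not Meta's actual API) of how a controller might flag a likely slip from noisy fingertip force readings: a stable grip produces near-constant force, while a slipping object causes the short-term variance to spike.

```python
import statistics

def detect_slip(force_readings, window=5, threshold=0.3):
    """Flag a likely slip when the variance of the most recent
    normal-force readings (in newtons) exceeds a threshold (N^2).
    Hypothetical logic for illustration, not a production detector."""
    if len(force_readings) < window:
        return False  # not enough samples to judge
    recent = force_readings[-window:]
    return statistics.pvariance(recent) > threshold

# A stable grip: force stays near 2 N, variance is tiny
stable = [2.0, 2.1, 2.0, 2.05, 2.1]
# A slipping object: force collapses, variance spikes
slipping = [2.0, 2.1, 1.2, 0.4, 0.1]
```

Real tactile encoders such as Sparsh learn far richer representations from raw sensor data, but the principle is the same: turn a stream of contact measurements into an actionable signal.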
Google DeepMind's Gemini Robotics is a Vision-Language-Action (VLA) model built upon Gemini 2.0, designed to bring AI reasoning and multimodal capabilities into the physical world. It enables robots to perform complex manipulation tasks with smooth and adaptive movements. Gemini Robotics-ER, an enhanced AI system with 3D spatial understanding, trajectory prediction, and object-grasping capabilities, powers the model. Notably, Gemini Robotics can specialise in new dexterous tasks, such as folding an origami fox or playing a game of cards, with just 100 demonstrations. It also supports zero-shot and few-shot learning, allowing quick adaptation to new robots and tasks. The system prioritises safety considerations for real-world applications, marking a significant step toward general-purpose, AI-powered robotic systems.
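A VLA model's control loop can be pictured as a simple interface: each step, the policy receives a camera frame plus a language instruction and returns low-level actions. The sketch below is a toy stand-in with placeholder logic (the class and field names are our own, not Gemini's); a real system would run a multimodal transformer where the comment indicates.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    image: bytes       # raw camera frame
    instruction: str   # natural-language command, e.g. "Pick up the cup"

@dataclass
class Action:
    joint_deltas: List[float]  # position change for each of 6 arm joints + gripper

class ToyVLAPolicy:
    """Hypothetical stand-in for a Vision-Language-Action model."""
    def predict(self, obs: Observation) -> Action:
        # A real VLA model would fuse vision and language in a transformer.
        # Here we only close the gripper (negative delta) on "pick" commands.
        grasp = "pick" in obs.instruction.lower()
        return Action(joint_deltas=[0.0] * 6 + ([-0.1] if grasp else [0.1]))

def control_step(policy: ToyVLAPolicy, obs: Observation) -> Action:
    """One tick of the perception-to-action loop."""
    return policy.predict(obs)
```

Few-shot specialisation, as described above, amounts to fine-tuning such a policy on a small set of demonstrations rather than retraining it from scratch.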
AI will change how we work, travel, and interact with machines. In the future, AI could become so advanced that robots will think, solve problems, and make decisions autonomously, bringing us closer to human-like intelligence. Robots will soon learn faster, adapting to new tasks with minimal programming. They will improve their understanding of the world by integrating sight, touch, and speech, making them more effective both at assisting people and in factory work. Improvements in movement and control will help robots handle delicate jobs, such as surgery or assembling small parts.
The concept of a cloud mind takes this further by allowing robots to share knowledge and insights, collectively improving their problem-solving abilities. It enables robots to access powerful AI models and vast datasets through the internet. By connecting to a shared intelligence network, robots will continuously learn from each other's experiences.
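In its simplest form, such a shared intelligence network is a pool of experience records that every robot can write to and query. The toy sketch below (our own illustration, with invented names) shows one robot benefiting from strategies another robot has already tried:

```python
class SharedExperiencePool:
    """Toy 'cloud mind': robots publish (task, strategy, outcome) records
    and query the pool for the most successful known strategy."""
    def __init__(self):
        # task -> {strategy: [successes, attempts]}
        self._records = {}

    def publish(self, task: str, strategy: str, success: bool) -> None:
        stats = self._records.setdefault(task, {}).setdefault(strategy, [0, 0])
        stats[0] += int(success)
        stats[1] += 1

    def best_strategy(self, task: str):
        strategies = self._records.get(task)
        if not strategies:
            return None  # no robot has attempted this task yet
        # Pick the strategy with the highest observed success rate
        return max(strategies, key=lambda s: strategies[s][0] / strategies[s][1])
```

A production system would share learned model weights or embeddings rather than plain success counts, but the pattern — publish locally, learn globally — is the core of the cloud-mind idea.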
Self-driving cars, drones, and robotic delivery systems will also advance. Robots designed for personal care will assist elderly and disabled individuals, offering emotional and medical support. Many businesses will rent robots instead of buying them, making automation more accessible.
Key advancements will include:
General AI systems are AI systems that can perform a wide range of tasks across different domains while remaining specialised in their functions. They can also learn, reason, and adapt to new challenges within their trained domain.
Current AI models, such as OpenAI's GPT-4, demonstrate progress toward this goal, handling multiple tasks using a single architecture. However, they still require extensive training and lack genuine autonomy in decision-making.
Artificial General Intelligence (AGI) is the next step beyond General AI. AGI systems will possess human-like intelligence: the ability to learn from experience and to apply knowledge to completely new situations without prior programming. An AGI would be capable of abstract thinking, problem-solving, and self-improvement, making it functionally indistinguishable from human cognition.
The most exciting aspect of AI's future is its convergence with biology and neuroscience. It will pave the way for bio-inspired computing, organoid intelligence, brain-computer interfaces (BCIs), and ambient AI.
These advancements could lead to self-improving AI systems that think, reason, and innovate autonomously, pushing AI beyond traditional machine and deep learning. The possibility of AGI emerging from a hybrid of biological and artificial intelligence is groundbreaking, opening doors to powerful AI that is deeply integrated with human cognition and life.
AI is reshaping the job market, but it's more about transformation than outright replacement. Many repetitive, rule-based tasks are being automated, creating demand for new roles such as AI trainers, prompt engineers, and human-AI collaboration specialists. The real challenge is reskilling workers for this shift. Expertise in AI consulting and data engineering will therefore be vital to making AI services and products a reality.
Rather than eliminating jobs, AI is changing which skills are most valuable. The workforce must adapt by focusing on critical thinking, creativity, and collaboration with AI systems. Instead of fearing AI, workers should focus on learning how to work alongside it.
AI helps robots perceive, learn, and make decisions, allowing them to operate autonomously. It enhances automation in industries like healthcare, manufacturing, and logistics.