AI & IoT Powered Research Infrastructure: Sensor Systems, Robotics, and AI within the bodh scientific Ecosystem

01 Mar 2026

Nischay Mittal, AI Developer Trainee at SarthhakAI

Satvik Kalra, Co-founder, COO at SarthhakAI

Abstract

Modern industrial research and development is rapidly transitioning toward data-driven, AI-assisted workflows. Organizations operating in domains such as packaging, material science, polymers, and advanced manufacturing increasingly require platforms that unify experimentation, automation, and analytics. SarthhakAI addresses this need through bodh scientific, an AI-driven R&D platform designed to manage experiments, structure data, and enable intelligent decision-making across laboratory environments.

This whitepaper presents three AI-enabled workstations developed by SarthhakAI in collaboration with the Indian Institute of Packaging (IIP), Delhi. Each workstation demonstrates a distinct aspect of applied AI while maintaining a clear integration strategy with the bodh scientific platform:

  1. bodhi Bot Workstation – a conversational, agentic AI system directly connected to the Talk-to-bodhi agent within bodh scientific.
  2. Robotic Arm Workstation – an autonomous, edge-driven automation system operating independently, with future platform integration.
  3. Smart Sensor Workstation – a distributed IoT sensing system that uploads structured experimental datasets into bodh scientific projects.

Together, these workstations illustrate how physical AI systems can be architected to support scalable, industry-aligned R&D workflows while remaining modular, extensible, and research-friendly.

Introduction

  Context

Industrial research and development environments are undergoing a fundamental shift toward AI-assisted, data-centric, and automation-driven workflows as Industry 4.0 progresses. Traditional R&D laboratory setups, which are often siloed, manually operated, and poorly integrated, are increasingly unable to meet modern demands for scalability, traceability, and speed of innovation. Even labs equipped with data logging and storage technologies often fail to use that data efficiently and effectively to generate insights, improve operations and processes, and reduce time and costs.

SarthhakAI addresses these challenges through an integrated ecosystem of AI-enabled physical workstations connected via bodh scientific, a homegrown AI-driven R&D platform. In collaboration with SarthhakAI, the workstations have been deployed at the Indian Institute of Packaging (IIP), Delhi as purpose-built prototypes for experimentation, data management, and intelligent interaction.

This whitepaper presents a high-level architectural and use-case-driven view of the SarthhakAI ecosystem. It is intended for industry stakeholders, research organizations, and engineering teams evaluating AI-driven laboratory platforms as clients or collaborators.

 Objectives

The primary objectives of this whitepaper are as follows:

  1. Present a Reference Architecture for AI-Driven R&D Labs: To define a clear, production-oriented architectural model for integrating AI, robotics, and IoT systems within modern industrial and research laboratories.
  2. Demonstrate Platform-Centric Experiment Management: To illustrate how bodh scientific functions as a centralized intelligence and data layer, enabling structured experiment tracking, traceability, and AI-assisted analysis across heterogeneous workstations.
  3. Explain Workstation-Level Design and Responsibilities: To describe, at a high level, the architecture, role, and value of each workstation (bodhi Bot, Smart Sensor, and Robotic Arm) within the broader ecosystem.
  4. Highlight Industry-Relevant Use Cases: To map each workstation to real-world industrial and applied research scenarios, particularly in domains such as packaging, material science, and advanced manufacturing.

Workstation 1: The bodhi Bot

Architecture

The bodhi Bot Workstation is architected as a human-facing, agentic AI system that bridges physical laboratory environments with the digital intelligence of the bodh scientific platform. Its primary purpose is to serve as an embodied interface to AI-driven R&D workflows, enabling natural, conversational interaction between researchers and experimental data.

At a high level, the architecture follows a thin-edge, intelligent-backend model:

Edge Layer (Embodied Interface): The physical bodhi Bot is powered by a robot-mounted single-board computer (Raspberry Pi class) responsible for speech input/output, basic perception, user interaction, and connectivity management.

AI Intelligence Layer (bodh scientific): All higher-order reasoning, natural language understanding, contextual awareness, and experiment-specific intelligence is delegated to the Talk to bodhi agent hosted within the bodh scientific platform.

[Figure: bodhi Bot system architecture]

Communication Layer: Secure, low-latency LAN-based APIs and socket channels ensure reliable bidirectional communication between the robot and the platform.

This separation ensures that the bodhi Bot remains lightweight, upgradeable, and safe for deployment in laboratory environments, while the AI backend continuously evolves without requiring changes to physical hardware.
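The communication layer described above can be illustrated with a minimal sketch. The message envelope below (length-prefixed JSON with `session`, `kind`, and `payload` fields) is an assumed format for illustration, not the production wire protocol; a local socket pair stands in for the robot-to-platform LAN link.

```python
import json
import socket

# Hypothetical message envelope for robot <-> platform traffic.
# Field names are illustrative, not the production schema.
def make_envelope(session_id: str, kind: str, payload: dict) -> bytes:
    """Serialize a message as length-prefixed JSON for a socket channel."""
    body = json.dumps({"session": session_id, "kind": kind, "payload": payload}).encode("utf-8")
    return len(body).to_bytes(4, "big") + body

def read_envelope(stream: bytes) -> dict:
    """Parse one length-prefixed JSON message from a raw byte stream."""
    length = int.from_bytes(stream[:4], "big")
    return json.loads(stream[4:4 + length].decode("utf-8"))

# Round-trip over a local socket pair, standing in for the LAN channel.
server, client = socket.socketpair()
client.sendall(make_envelope("s1", "query", {"text": "status?"}))
msg = read_envelope(server.recv(4096))
server.close(); client.close()
```

Length-prefixing each message keeps framing unambiguous on a persistent socket, which matters when the robot streams many small requests to the backend.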

Features and Workflows

The bodhi Bot Workstation supports a range of features centred around conversational AI, contextual lab assistance, and knowledge-driven interaction. In addition to language-based intelligence, bodhi Bot also incorporates computer vision capabilities that enable it to perceive and interact with the physical laboratory environment.

Knowledge Access and Intelligence Model

The Talk to bodhi agent operates on a hybrid knowledge model, combining platform-specific documents with its broader foundational intelligence:

  • Primary Knowledge Base (Platform-Scoped): The agent has direct, governed access to all documents users upload to the bodh scientific platform. This includes SOPs, research papers, PDFs, technical documentation, project notes, observations, and experiment metadata. All responses generated by the agent are contextually grounded in this uploaded material when relevant, ensuring that answers are aligned with the organization’s proprietary knowledge.
  • Foundational Knowledge (General Intelligence): In addition to platform-specific data, the agent possesses broad domain knowledge across science, engineering, AI, and industrial R&D practices. This enables it to explain concepts and methodologies, provide comparative reasoning and best-practice guidance and assist in experimental planning. Foundational knowledge is used to augment, not override, platform-specific information.

Visual Perception Capabilities

Beyond conversational intelligence, bodhi Bot integrates on-device computer vision to enhance situational awareness and interaction with physical artifacts in the lab.

  • Object Detection: The robot can identify and localize predefined objects such as different packaging types, containers, and equipment. This enables context-aware responses, for example by recognizing which apparatus a user is referring to during a conversation.
  • QR Code Recognition: bodhi Bot can scan and interpret QR codes attached to samples, equipment, or experiment stations. QR data can be used to identify experiment IDs or batch numbers, retrieve associated documentation from bodh scientific or confirm sample provenance during workflows.

These perception features allow bodhi Bot to bridge the gap between digital experiment records and physical laboratory entities.
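The step after the camera-side QR decode is interpreting the tag payload. The sketch below assumes a hypothetical payload format of `"EXP-<id>|BATCH-<n>|SAMPLE-<code>"`; the real encoding used in the deployment may differ.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical QR payload format: "EXP-<id>|BATCH-<n>|SAMPLE-<code>".
# This only sketches the parsing step that follows the camera-side
# decode (e.g. OpenCV's QRCodeDetector on the Raspberry Pi).

@dataclass
class SampleTag:
    experiment_id: str
    batch: str
    sample: str

def parse_qr_payload(payload: str) -> Optional[SampleTag]:
    """Split a scanned payload into its identifier fields."""
    parts = dict(p.split("-", 1) for p in payload.split("|") if "-" in p)
    if not {"EXP", "BATCH", "SAMPLE"} <= parts.keys():
        return None  # malformed tag: let the bot ask the user to rescan
    return SampleTag(parts["EXP"], parts["BATCH"], parts["SAMPLE"])
```

Returning `None` on a malformed tag, rather than raising, lets the conversational layer respond gracefully (for example by asking the user to rescan).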

Use-cases

Live Document-Grounded Q&A for Researchers

During complex experiments, researchers often have ad-hoc technical questions (e.g. “What reagent neutralizes this sample?” or “What’s the next incubation step?”). Stopping to search through manuals or data can disrupt the workflow. bodhi Bot provides an intelligent chat interface connected to the bodh scientific knowledge base. It enables on-demand, document-grounded answers: researchers ask questions in natural language (voice or text) and receive precise, context-aware responses.

Solution & Workflow: At any point in a lab session, a researcher can ask bodhi Bot a question about the experiment or equipment:

  • Question Intake: The user speaks or types a question. bodhi Bot uses NLP to interpret the intent.
  • Retrieval: The bot searches bodh scientific’s databases for relevant content – SOPs, lab notebooks, safety datasheets, and past experiment records. It retrieves document excerpts related to the query.
  • Response Generation: A large language model integrated into bodhi Bot uses a RAG pipeline to formulate an answer. The response is grounded on the retrieved documents, ensuring factual accuracy.
  • Answer Presentation: bodhi Bot delivers the answer via its voice interface. It can also cite sources or quote snippets from the retrieved documents.
  • Logging: The question and answer are logged in bodh scientific with references. This log helps train future assistants and ensures users can later review the source of information.
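The retrieval and prompt-assembly steps above can be sketched as follows. Real deployments would use vector embeddings for retrieval; simple keyword-overlap scoring stands in here so the flow stays self-contained, and the document snippets are invented examples.

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
# Scoring by keyword overlap is a stand-in for embedding search.

def score(query: str, doc: str) -> int:
    """Count shared words between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: dict, k: int = 2) -> list:
    """Return names of the top-k documents with nonzero relevance."""
    ranked = sorted(docs.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return [name for name, text in ranked[:k] if score(query, text) > 0]

def build_prompt(query: str, docs: dict) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"[{n}] {docs[n]}" for n in retrieve(query, docs))
    return f"Answer using only the context below.\n{context}\nQ: {query}"

library = {  # invented example snippets
    "sop_incubation": "incubation step lasts 30 minutes at 37 C",
    "msds_acid": "neutralize acid spills with sodium bicarbonate",
}
prompt = build_prompt("what neutralizes this acid sample", library)
```

Filtering out zero-score documents keeps irrelevant material out of the prompt, which is what keeps the generated answer grounded in the uploaded library.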

bodh scientific Integration: bodh scientific acts as the central data and document repository. It stores protocol PDFs, experimental data, and knowledge articles. For example, if the question is about a chemical hazard, bodhi Bot pulls the safety sheet from Bodh’s library. This integration guarantees that answers are up-to-date and traceable, adhering to the principle that responses should be “factually anchored” in real documents. Over time, Bodh’s analytics track common questions to improve documentation.

The Value: Researchers never have to leave the bench to find information. Real-time, context-aware answers accelerate decision-making and reduce experimental errors. The document-grounded approach builds confidence in AI recommendations and supports traceability – every answer is backed by logged evidence. bodhi Bot thus transforms knowledge access into an interactive lab assistant, increasing productivity and learning.

QR-Based Sample Validation and Experiment Lookup

In chemical and biological labs, samples and reagents are labelled with QR codes or barcodes to track their identity. Mislabelling or misplacing samples can ruin experiments and violate compliance. bodhi Bot’s vision system uses QR scanning and object recognition to validate sample identity on the fly. When a scientist picks up a sample tube or reagent container, bodhi Bot scans the code and confirms that it matches the intended experiment.

Solution & Workflow: bodhi Bot streamlines sample handling as follows:

  • Scan & Identify: The researcher holds the sample or reagent container in view of the bodhi Bot camera. bodhi Bot detects the QR/barcode and reads the identifier.
  • Database Lookup: The scanned ID is sent to bodh scientific, which maintains a registry of samples, reagents, and their associated experiments. bodhi Bot retrieves details such as sample type, collection date, concentration, and linked protocols.
  • Contextual Info Display: bodhi Bot dynamically displays the associated experiment’s SOP, expected results range, and any safety notes. For example, scanning a blood sample’s QR might immediately show which assay and dilution to use, or highlight the next step in the workflow.
  • Validation & Alerting: If the scanned code does not match the expected item for the current experiment, bodhi Bot alerts the user and blocks further steps. All scans (user, timestamp, location) are logged back to bodh scientific, creating a complete audit trail. This detailed chain-of-custody is critical for traceability and regulatory compliance.
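The validation-and-logging step can be sketched in a few lines. The in-memory registry and audit log below are stand-ins for the bodh scientific LIMS, and the field names are illustrative.

```python
import datetime

# In-memory stand-ins for the bodh scientific sample registry and audit trail.
registry = {"SAMPLE-ABC": {"experiment": "EXP-2031"}}
audit_log = []

def validate_scan(scanned_id: str, expected_experiment: str, user: str) -> bool:
    """Check a scanned sample against the expected experiment and log the event."""
    record = registry.get(scanned_id)
    ok = record is not None and record["experiment"] == expected_experiment
    audit_log.append({
        "sample": scanned_id,
        "user": user,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "accepted": ok,  # a False entry is what triggers the user alert
    })
    return ok
```

Note that the scan is logged whether or not it matches; recording rejected scans is what makes the audit trail a complete chain of custody.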

bodh scientific Integration: The Bodh platform serves as the sample inventory LIMS. Each sample record on Bodh includes metadata and links to experiments and results. bodhi Bot’s QR lookup queries this LIMS to ensure real-time consistency. For example, if a future experiment requires using Sample ABC, scanning that sample will automatically retrieve and display the linked experiment plan. The seamless LIMS integration realizes the vision of dynamic, QR-driven lab workflows described in best practices.

The Value: Automated QR validation prevents mix-ups and enforces correct sample usage. Researchers gain immediate visibility into sample provenance and experiment context without manual data entry. The embedded logging enhances reproducibility and compliance: every sample movement is time-stamped and user-attributed. This saves time and reduces errors, while satisfying stringent industry requirements for sample tracking in R&D.

Workstation 2: Robotic Arm

Architecture

The Robotic Arm Workstation is designed as an edge-autonomous robotic manipulation system focused on precision, safety, and real-time responsiveness. Unlike the other workstations, it intentionally operates independently of the bodh scientific platform.

The architecture includes:

  • Edge Compute Layer: An NVIDIA Jetson–class device executes perception, planning, and control workloads locally.
  • Perception Layer: A depth camera provides spatial awareness and object localization.
  • Control and Actuation Layer: A ROS(2)-based control stack manages motion planning, inverse kinematics, and actuator commands.

This fully edge-resident architecture ensures deterministic performance and minimizes latency.

Features and Workflows

  1.  Manual Control Mode (Mobile App–Based)

The robotic arm can be operated directly through a dedicated mobile application, allowing users to control joint movements and gripper actions remotely. This mode is designed for tasks requiring human judgment, fine positioning, demonstrations, or calibration. Real-time feedback from arm sensors ensures accurate execution of commands, while built-in safety monitoring maintains operational limits during manual interaction.

  2.  Automatic Mode (Vision-Guided Operation)

In automatic mode, the system uses 3D vision to detect and identify target objects within its workspace. The control system calculates the required grasp pose using inverse kinematics and executes the motion autonomously. This enables repeatable and precise pick-and-place or grasping tasks, supporting applications in experimental automation, material handling, and intelligent manipulation without continuous human control.
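The inverse-kinematics step mentioned above can be illustrated with the textbook two-link planar case. The link lengths below are arbitrary example values; the actual arm would use a full multi-DOF solver in its ROS 2 stack.

```python
import math

# Two-link planar inverse kinematics: given a target (x, y) in the arm's
# plane, solve for the shoulder and elbow joint angles. Link lengths are
# illustrative, not the real arm's dimensions.

def ik_2link(x: float, y: float, l1: float = 0.3, l2: float = 0.2):
    """Return (shoulder, elbow) angles in radians, or None if unreachable."""
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if not -1.0 <= cos_elbow <= 1.0:
        return None  # target outside the arm's reachable workspace
    elbow = math.acos(cos_elbow)  # elbow-down solution
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

Checking `cos_elbow` against [-1, 1] before calling `acos` is the reachability test: it is how the planner rejects grasp targets outside the workspace instead of crashing mid-motion.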

Use-cases

Automated Pick-and-Place for Packaging Prototypes

This use case focuses on automated pick-and-place operations for handling packaging components such as boxes, containers, inserts, or sample packets. The robotic arm is programmed to pick items from a source location and place them accurately at predefined target positions. The primary experimental context is validating physical handling feasibility and repeatability during early-stage packaging design and prototyping.

Architecturally, the robotic arm operates as a standalone mechatronic system with servo motors, end-effectors (grippers), and a local control unit. Motion paths and grip parameters are preprogrammed or adjusted manually. The system executes deterministic movement sequences without dependency on cloud services. When required, summary execution logs (cycle count, success/failure) can be optionally recorded externally for documentation.

Steps:
(1) Define pick and place coordinates;
(2) Load packaging components at the source location;
(3) Robotic arm grips the object and transfers it to the target position;
(4) Cycle repeats for multiple items;
(5) Operator evaluates placement accuracy and repeatability.
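The cycle loop and the optional summary log described above can be sketched as a simple simulation. Servo commands are replaced by a pluggable grip check; the coordinates and success criterion are illustrative.

```python
# Sketch of the deterministic pick-and-place cycle loop with the optional
# summary log (cycle count, success/failure). Motion is simulated; a real
# run would issue servo commands and verify the grip via force feedback.

def run_cycles(items, pick, place, grip=lambda item: True):
    """Transfer each item from pick to place, tallying outcomes."""
    log = {"cycles": 0, "success": 0, "failure": 0, "pick": pick, "place": place}
    for item in items:
        log["cycles"] += 1
        if grip(item):           # real system: close gripper, confirm hold
            log["success"] += 1  # move pick -> place executed
        else:
            log["failure"] += 1  # gripper failed to acquire the item
    return log

summary = run_cycles(["box-1", "box-2", "insert-1"],
                     pick=(0, 0), place=(10, 5))
```

Keeping the log as a plain dictionary mirrors the "optionally recorded externally" design: the arm stays standalone, and the summary can be exported to bodh scientific later if integration is enabled.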

Platform Role: The robotic arm functions independently. If integrated, bodh scientific may receive high-level outcome data (e.g., number of successful cycles) but does not control the arm or its logic.

Value: Engineers validate automation feasibility early in the design phase. Industry teams assess whether packaging geometry supports robotic handling. In academic labs, students learn foundational automation and industrial robotics concepts.

Repetitive Mechanical Stress Testing Through Controlled Motion

This use case involves using the robotic arm to apply repetitive, controlled mechanical motion to packaging materials or components, simulating wear, handling stress, or repeated use. Examples include repeated lid opening/closing, bending of flexible materials, or insertion–removal cycles.

Architecturally, the robotic arm executes predefined motion loops with consistent speed, force (via torque limits), and repetition count. Unlike manual testing, the arm ensures uniform stress application across thousands of cycles. The workstation remains fully offline-capable, ensuring deterministic execution without latency or network dependency.

Steps:
(1) Mount the packaging component or test sample;
(2) Program motion parameters (range, speed, repetitions);
(3) Robotic arm executes repetitive motion cycles;
(4) Test stops after a defined cycle count or visible failure;
(5) Physical degradation is evaluated manually or via sensors.

Value: Researchers gain repeatable mechanical fatigue data. Industry users evaluate long-term durability without human fatigue bias. Students observe how automation enables controlled stress testing.

Precision Object Sorting and Physical Classification

This use case focuses on sorting physical objects based on predefined categories such as size, shape, or weight class. The robotic arm is used to move objects into different bins or zones after classification logic is applied externally or manually configured. Architecturally, the arm follows deterministic paths to transfer objects from a common input area to designated output locations. Classification decisions may be based on operator input, pre-labelled samples, or integration with external sensing systems, but the robotic arm itself remains a motion execution system.

  • Steps:
    (1) Place mixed objects in the input area;
    (2) Define sorting categories and target locations;
    (3) Robotic arm picks an object and places it into the assigned category zone;
    (4) Process repeats until all objects are sorted;
    (5) Sorted batches are manually verified.
  • Value: Demonstrates industrial automation fundamentals. Enables rapid prototyping of sorting logic. Useful in education for understanding automation pipelines without full factory infrastructure.

Workstation 3: Smart Sensors

Architecture

The Smart Sensor Workstation is architected as a distributed, experiment-centric IoT data acquisition system that directly feeds structured experimental data into the bodh scientific platform. It is designed specifically to enable data-driven insights on experimental data within the bodh scientific platform.

At a high level, the architecture consists of three layers:

  • Sensor Layer (Edge Nodes): Individual ESP32 / ESP32-C6–based sensor nodes (kits) interface with physical sensors measuring parameters such as temperature, humidity, pressure, or other experiment-specific variables. Each node performs local sampling, preprocessing, and validation to ensure signal integrity.
  • Aggregation Layer (Local Hub): A Raspberry Pi–class edge server aggregates data streams from multiple sensor nodes over a dedicated LAN. This layer performs timestamp alignment, sanity checks, and batching of readings before upload.
  • Platform Integration Layer: Validated datasets are uploaded to bodh scientific via experiment APIs. Each upload is associated with a specific project and results in the creation of a discrete experiment record.

This layered approach ensures resilience to network disruptions, modular scalability, and clean separation between sensing, aggregation, and experiment management.
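The aggregation layer's sanity-check-and-batch step can be sketched as follows. The valid ranges and field names are illustrative assumptions; real nodes would also align timestamps across kits before upload.

```python
import statistics

# Sketch of the aggregation layer: reject out-of-range readings, then
# batch the survivors for upload. Ranges below are example values only.

VALID_RANGE = {"temperature_c": (-40.0, 85.0), "humidity_pct": (0.0, 100.0)}

def sane(reading: dict) -> bool:
    """Range check, the simplest form of signal-integrity validation."""
    lo, hi = VALID_RANGE[reading["kind"]]
    return lo <= reading["value"] <= hi

def batch(readings: list) -> dict:
    """Batch validated readings into one upload-ready summary record."""
    good = [r for r in readings if sane(r)]
    return {
        "count": len(good),
        "rejected": len(readings) - len(good),
        "mean": statistics.mean(r["value"] for r in good) if good else None,
    }

b = batch([
    {"kind": "temperature_c", "value": 22.5},
    {"kind": "temperature_c", "value": 23.5},
    {"kind": "temperature_c", "value": 999.0},  # sensor glitch, rejected
])
```

Rejecting implausible values at the hub, before upload, keeps glitched samples out of the experiment record while the `rejected` count preserves evidence that a sensor misbehaved.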

Features and Workflows

The following sensor kits are currently deployed, each with a defined purpose:

  • Packet Isolation Detection: Measure parameters such as temperature, pressure, and humidity to evaluate the protective performance of packaging material.
  • RFID Warehousing System: Track product ID, information, and location for warehouse management and inventory monitoring.
  • Photosensitivity System: Measure light sensitivity of different packaging materials using a 5-LDR array.
  • Soil Moisture Detection System: Determine moisture content within a packaged product.
  • Air Quality Monitoring System: Detect gas levels (MQ sensors such as MQ-4 and MQ-6) from packaging material.
  • Load Capacity Measurement: Evaluate the mechanical strength of packaging material by measuring the load/weight it can bear.

[Figure: Smart Sensor Workstation data flow]

The current setup consists of the above sensor kits, each designed for a specific purpose.

Use-cases

Packet Isolation Detection System

This use case evaluates how effectively sealed packaging isolates internal contents from external environmental changes. Temperature, humidity, and pressure sensors are placed both inside and outside test packets, and the Smart Sensor Workstation (SSW) continuously logs the differential readings with timestamps and sample metadata. The SSW normalizes and streams this data into the Bodh Scientific ecosystem, where each trial is stored as a structured experiment sheet. Analysts observe trends such as pressure gradients or moisture ingress, enabling rapid identification of micro-leaks or seal failures. This automated, traceable workflow replaces manual logging and provides compliance-ready time-series proof of packaging integrity for industry, while giving students real-time visualization of environmental barrier performance.
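The inside/outside differential computation at the heart of this workflow can be sketched in a few lines. The drift threshold is an invented example value, not a validated criterion.

```python
# Sketch of the packet-isolation differential check: a tight seal keeps a
# stable gap between inside and outside readings; a leak closes that gap.

def isolation_deltas(inside: list, outside: list) -> list:
    """Pairwise difference between synchronized inside/outside readings."""
    return [o - i for i, o in zip(inside, outside)]

def seal_suspect(deltas: list, drift_limit: float = 2.0) -> bool:
    """Flag a possible seal failure: the gap started wide but has collapsed."""
    return abs(deltas[-1]) < drift_limit and abs(deltas[0]) >= drift_limit

# Humidity (%RH), simulated: outside held constant, inside creeping upward.
leaky = isolation_deltas(inside=[30, 34, 39, 44], outside=[45, 45, 45, 45])
```

In the deployed system the same delta series would be uploaded to bodh scientific as a time-stamped experiment sheet, so the collapse of the gap is visible as a trend rather than a single flag.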

Photosensitivity Testing System

This scenario measures how packaging or materials respond to light exposure. Samples are placed under controlled illumination, and an LDR sensor array connected to the SSW records intensity profiles over time. The workstation structures these readings with exposure parameters and environmental context before sending them into Bodh. The platform correlates light data with material characteristics to compute degradation indicators such as intensity-response curves or sensitivity factors. Engineers use this to select light-resistant materials in food, cosmetics, or pharmaceutical packaging, while researchers and students gain quantitative insight into photochemical effects through automatically generated experiment records and visual reports.

Soil Moisture Detection System

This use-case monitors moisture dynamics in stored materials or environmental samples. Moisture probes embedded in soil or granular material feed continuous data to the SSW, which timestamps and structures volumetric water content alongside ambient conditions. Over time, Bodh aggregates these datasets to reveal absorption patterns, storage condition impacts, or threshold breaches. This supports R&D on moisture-control packaging and storage optimization, providing early warnings against spoilage risks. For education, it demonstrates precision environmental monitoring and long-term data analysis in a real IoT-driven research loop.

Air Quality Monitoring System

Gas sensors connected to the SSW continuously measure air composition (e.g., CO₂, CO, VOCs) in lab or production environments. The workstation structures concentration readings with location and calibration metadata before transmitting them into Bodh for real-time analysis. Trend tracking and threshold evaluation help identify leaks or unsafe accumulation, ensuring safety compliance and environmental monitoring. Facilities benefit from automated incident logging and regulatory documentation, while learners see industrial safety systems operating through live dashboards and alert workflows.

RFID Warehousing System

RFID tags attached to laboratory assets or materials are read by networked scanners, with the SSW logging each movement event (item ID, location, timestamp). This creates a live inventory flow dataset that Bodh maintains as operational knowledge. Stock levels, movement histories, and threshold alerts enable intelligent inventory control and automated replenishment decisions. Researchers reduce manual counting, industries minimize stockouts or overstocking, and students observe real-time digital supply-chain tracking integrated into a research environment.

Load Capacity Measurement System

Packaging samples are tested on a load rig equipped with distributed load cells. The SSW records force readings as loads increase, structuring them with sample metadata and timestamps. Bodh then computes mechanical strength metrics and identifies failure thresholds, enabling systematic comparison of materials. This provides industry with validated durability data for logistics and storage, while offering students exposure to real mechanical testing workflows powered by sensor analytics.
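The failure-threshold step can be illustrated with a small sketch: scan the force trace for the first large drop after a peak. The 30% drop criterion is an assumed example, not the platform's actual metric.

```python
# Sketch of failure detection in a load-cell trace: the sample is judged
# failed at the peak force preceding the first sudden drop. The drop
# fraction is an illustrative assumption.

def failure_load(forces: list, drop_fraction: float = 0.3):
    """Return the force at which the sample failed, or None if no failure."""
    peak = forces[0]
    for f in forces[1:]:
        if f < peak * (1 - drop_fraction):
            return peak  # collapse detected after this peak
        peak = max(peak, f)
    return None  # trace ended without a qualifying drop

trace = [50, 120, 240, 310, 150, 90]  # newtons, simulated ramp then collapse
```

Returning the pre-drop peak, rather than the reading at the drop itself, matches the usual convention of reporting the maximum load the sample sustained.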

bodh scientific Role in Use-Cases

Across all the Smart Sensor Workstation experiments, the Bodh Scientific ecosystem functions as the centralized R&D intelligence layer. It ingests structured sensor data, attaches experiment metadata, maintains versioned experiment sheets, and applies analytics for visualization, threshold monitoring, and report generation. By acting as a unified knowledge base, Bodh enables traceability, cross-experiment comparison, compliance documentation, and automated insights — transforming raw sensor streams into actionable research intelligence.

Conclusion and Future Scope

5.1 Conclusion

The workstations described in this whitepaper demonstrate that modular, sensor-driven and AI-enabled research systems can move beyond prototype environments into industry-ready deployment. Each workstation — whether focused on environmental sensing, intelligent inventory, chemical testing, or autonomous interaction — operates as a functional solution while remaining connected to the Bodh Scientific ecosystem. This integration ensures that data is not isolated at the device level but becomes part of a structured, analysable, and traceable research intelligence layer. As a result, organizations gain standardized experiment logging, automated analysis, and compliance-ready documentation without relying on fragmented tools or manual processes.

At the same time, this infrastructure plays a crucial role in workforce development. By exposing students to real sensor networks, AI-assisted analytics, robotic systems, and cloud-linked experimentation workflows, the platform bridges the gap between academic labs and modern industry environments. Learners do not just perform experiments; they operate within a digital R&D ecosystem that mirrors industrial practices. This combination of practical hardware interaction and data-centric thinking equips future engineers and researchers with the interdisciplinary skills demanded by Industry 4.0 and AI-driven innovation sectors.

Importantly, the ecosystem is designed with secure data handling in mind: experiment data is uploaded through protected channels and maintained in logically separated project environments, ensuring that users’ datasets remain isolated, private, and accessible only within their authorized research scope.

5.2. Future Scope

The long-term vision extends toward the development of bodh scientific–controlled Continuum Labs, delivered as a scalable infrastructure model. In this approach, fully equipped research laboratories containing Smart Sensor Workstations, robotic systems, and AI-enabled devices are physically hosted and managed under the Bodh Scientific ecosystem. Researchers, institutions, and industry partners can access these labs remotely, operating experiments, sensors, and devices through platform interfaces without being physically present. This transforms laboratory infrastructure into an on-demand, remotely operable research environment, conceptually similar to cloud computing but applied to physical experimentation.

Such a model enables users to run tests, collect real sensor data, interact with robotic systems, and retrieve structured experiment sheets entirely through Bodh-enabled workstation functionalities. It reduces infrastructure costs for smaller institutions, increases equipment utilization efficiency, and allows geographically distributed researchers to work on shared physical systems. It also promotes standardized experimental procedures, since hardware configurations and data pipelines are centrally governed within the ecosystem.

Further enhancements can expand both capability and intelligence across the platform. Future iterations may include deeper AI integration for predictive analytics, anomaly detection, and automated experiment optimization. Additional workstation modules — such as advanced vision systems, spectroscopy tools, or autonomous mobile robotics — can broaden experimental coverage. Edge intelligence improvements may allow more preprocessing and decision-making at the workstation level, reducing latency and enabling real-time control loops. Digital twin models of experiments and equipment could provide simulation-backed validation before physical testing. Enhanced interoperability standards would allow third-party devices and external lab systems to connect seamlessly, strengthening the platform’s role as an open research infrastructure.

Together, these advancements position the Bodh Scientific ecosystem not merely as a set of connected tools, but as the foundation for Continuum Labs - an evolving, intelligent research infrastructure designed to support future industrial and academic innovation at scale.

Acknowledgment

The authors would like to express sincere appreciation to the SarthhakAI team for their technical vision, engineering expertise, and sustained effort in conceptualizing, designing, and developing the intelligent workstation solutions described in this whitepaper. Their work in building the Bodh Scientific ecosystem and integrating AI, sensing, and automation technologies has been central to realizing a practical, industry-relevant research infrastructure.

We also gratefully acknowledge the collaboration and support of the IIP team, whose contributions in deployment, experimentation, and workstation-level implementation played an important role in validating and operationalizing these systems. Their involvement helped bridge the gap between platform architecture and real-world laboratory application.

We extend special thanks to the following IIP team members for their contributions to the workstation development and testing efforts:

Dr. Tanweer Alam, Additional Director, IIP Delhi
Dr. Anup Ghosh, Professor (Emeritus), IIP & IIT Delhi
Mr. Rahul Tirpude, Deputy Director, IIP Delhi

Their combined efforts have enabled the successful translation of research concepts into functioning, connected laboratory systems.

Appendix

bodhi Bot Workflow

  1. System Start / Power On: The bodhi Bot device is powered on, initiating the software and hardware startup sequence.
  2. Startup Scripts Execution: Core services, AI modules, vision systems, microphone input, speakers, and communication services are initialized.
  3. Audio Listening Mode Activated: The system continuously listens for incoming audio input from users through onboard microphones.
  4. Wake Word Detection Check: Incoming audio is analysed in real time to detect the predefined wake word.
  5. Return to Listening (If Wake Word Not Detected): If no wake word is recognized, the system continues listening without triggering further action.
  6. User Acknowledgment (If Wake Word Detected): Upon wake word detection, the bot acknowledges the user through a gesture (e.g., hand wave) or visual cue.
  7. Audio Recording and Transmission: The user’s spoken input is recorded and sent to the onboard or connected CPU for processing.
  8. Speech-to-Text Transcription: The recorded audio is converted into text for intent and query analysis.
  9. User Query Classification: The transcribed text is analysed to determine the query type (general knowledge query or object/QR-related query).
  10. QR or Object-Related Query Routing: If the query relates to a visible object or QR code, the system activates the vision pipeline.
  11. Object Detection and QR Reading: The vision module identifies objects in view or scans QR codes to extract associated data.
  12. Object Detection Service Processing: Detected object data is processed and interpreted by the object detection service.
  13. Output Display (Vision Results): Recognized object details or QR-derived information are displayed or prepared for verbal response.
  14. General Query Routing to bodhi AI Platform: For general questions, the text query is sent to the bodh scientific platform’s “Talk to bodhi” AI agent.
  15. Response Generation: The bodhi AI platform processes the query using its knowledge base, including user-uploaded documents.
  16. Response Reception: The generated answer is returned from the platform to the bodhi Bot system.
  17. Voice Response Delivery: The bot converts the response to speech and communicates the answer to the user.
  18. Result Output / Display: Responses, object data, or QR information may also be shown on the display interface.
  19. Return to Listening Mode: After completing the interaction, the system returns to passive listening for the next wake word.
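The routing logic in steps 9–17 can be sketched in Python. This is a minimal illustration using hypothetical stub functions (`run_vision_pipeline`, `query_talk_to_bodhi`, the keyword list, and the example outputs are all assumptions for illustration, not the production implementation):

```python
# Sketch of the bodhi Bot query-routing stage (steps 9-17).
# All function bodies are placeholder stubs; a real deployment would
# back them with the vision service and the Talk-to-bodhi platform API.

def classify_query(text: str) -> str:
    """Step 9: decide between the vision pipeline and a general query."""
    vision_keywords = ("object", "qr", "scan", "see")  # assumed heuristic
    if any(k in text.lower() for k in vision_keywords):
        return "vision"
    return "general"

def handle_utterance(transcript: str) -> str:
    """Steps 10-17: dispatch the transcribed query and return a response."""
    if classify_query(transcript) == "vision":
        return run_vision_pipeline()        # steps 11-13: detect / read QR
    return query_talk_to_bodhi(transcript)  # steps 14-16: platform round trip

# --- placeholder stubs standing in for real services ---
def run_vision_pipeline() -> str:
    return "Detected: sample package (QR: batch-042)"

def query_talk_to_bodhi(text: str) -> str:
    return f"bodh scientific answer for: {text}"
```

In practice the classification step would likely be handled by an intent model rather than keyword matching; the sketch only shows how the two routes diverge.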

Robotic Arm Workflow

  1. System Start: The robotic arm control application is launched, initiating the system runtime sequence.
  2. Hardware and Software Initialization: ROS nodes, motor drivers, sensors, cameras, and the user interface are initialized, ensuring all communication channels are active.
  3. System Health Monitoring Activation: Continuous monitoring of motor current and temperature begins to ensure safe operating conditions.
  4. Health Status Evaluation: Sensor readings are evaluated to determine whether any electrical or thermal fault condition exists.
  5. Fault Handling: If abnormal parameters are detected, all motion stops immediately and an error notification is displayed.
  6. Safe Shutdown Procedure (Fault Path): Diagnostic data is logged, components are safely powered down, and the system terminates.
  7. Main Menu Display: If no fault is present, the UI displays control options and waits for user input.
  8. Control Mode Selection: The operator selects Manual Control, Automatic Grasp Mode, or Shutdown.
  9. Manual Control Activation: The user issues direct arm movement commands through the touchscreen interface.
  10. Manual Command Execution: The system translates user inputs into joint-level motor instructions and executes motion.
  11. Arm Feedback Monitoring: Joint positions, loads, and motion accuracy are continuously verified against commanded values.
  12. Object Detection and Identification (Automatic Mode): The vision system captures scene data and identifies the target object using 3D perception.
  13. Grasp Pose Computation: Inverse kinematics algorithms compute the required joint angles and end-effector pose.
  14. Grasp Motion Execution: The arm moves to the target, aligns the gripper, performs closure, and attempts object lift.
  15. Grasp Success Evaluation: Sensors or vision feedback determine whether the object has been successfully secured.
  16. Success Notification (If Successful): A success message is displayed, and the system returns to the main menu.
  17. Retry Decision (If Failed): If the grasp fails, the system checks whether a retry is requested.
  18. If Retry = Yes: The grasp process repeats from motion execution.
  19. If Retry = No: The system exits the grasp routine and returns to the main menu.
  20. Shutdown Selection: If shutdown is chosen, the system exits operational modes.
  21. Safe Shutdown and Data Logging: Operational logs and system states are saved before power-down.
  22. System End: The robotic arm workstation completes its lifecycle.
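The health-check gate (steps 3–5) and the grasp-retry loop (steps 15–19) can be sketched as follows. The current and temperature limits here are placeholder values, not the workstation's actual operating thresholds:

```python
# Sketch of the robotic arm's fault gate and grasp-retry logic.
# MAX_CURRENT_A and MAX_TEMP_C are illustrative limits only.

MAX_CURRENT_A = 3.0   # assumed motor-current limit (amps)
MAX_TEMP_C = 70.0     # assumed thermal limit (deg C)

def health_ok(current_a: float, temp_c: float) -> bool:
    """Step 4: evaluate electrical and thermal fault conditions."""
    return current_a <= MAX_CURRENT_A and temp_c <= MAX_TEMP_C

def attempt_grasp(try_grasp, max_retries: int = 2) -> bool:
    """Steps 15-19: execute a grasp, evaluate success, retry on failure.

    try_grasp is a callable returning True when sensors/vision confirm
    the object is secured (step 15).
    """
    for _attempt in range(1 + max_retries):
        if try_grasp():
            return True   # step 16: success, back to main menu
    return False          # step 19: retries exhausted, exit grasp routine
```

In the actual system these checks run continuously alongside motion control (e.g. in separate ROS nodes); the sketch collapses them into plain functions to show the decision flow.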

Sensor Kits Workflow

  1. System Power on and Sensor Activation: The workstation boots and all connected sensor modules are powered and initialized.
  2. Sensor Data Acquisition: Each sensor module (temperature, humidity, gas, load, pH, etc.) begins collecting measurements from its environment.
  3. Data Transmission to Processing Unit: Sensor data is continuously transmitted to the Raspberry Pi (or central controller) through configured network ports.
  4. Port Listening and Packet Reception: The system listens on all defined communication ports to receive incoming sensor data packets.
  5. Raw Data Processing: Incoming raw sensor packets are extracted, formatted, and validated to ensure data integrity.
  6. Dashboard Data Parsing: Processed data is parsed into structured values and displayed on the workstation dashboard for real-time monitoring.
  7. Logging Status Check: The system checks whether data logging is currently active.
  8. Sensor Data Logging (If Logging Enabled): If logging is active, sensor readings are appended to a CSV file along with timestamps and duration information.
  9. User Upload Trigger: The upload workflow begins when the user initiates a data upload command.
  10. CSV Data Availability Check: The system verifies whether the CSV file contains valid data for upload.
  11. CSV to JSON Conversion (If Data Present): If data exists, the CSV dataset is converted into JSON format for structured transmission.
  12. Data Transmission to Remote Server: The JSON-formatted dataset is sent to the remote server (bodh scientific platform).
  13. Upload Status Evaluation: The system checks whether the upload process was successful.
  14. Upload Failure Handling (If Failed): If the upload fails, an error message is displayed, and the user is given the option to retry.
  15. Upload Success Confirmation (If Successful): Upon successful upload, a confirmation message is shown to the user.
  16. Alert Generation: A system alert or notification is printed/logged to indicate successful data transfer.
  17. Process Completion: The upload cycle completes and the system remains ready for further monitoring or uploads.