Full Time
US
India
Role
As a Senior Backend Developer at Nasiko, you will design, build, and optimize high-performance backend systems that power our distributed AI ecosystem. You will collaborate closely with ML engineers, platform teams, and infrastructure groups to deploy AI-powered services, scale intelligent workflows, and serve models efficiently across diverse environments. Your work will be central to Nasiko’s mission of enabling robust, interoperable, and adaptive AI coordination.
Core Responsibilities
Develop and maintain backend systems, microservices, and APIs using Golang.
Architect and deploy distributed infrastructure for high-throughput AI workloads.
Collaborate with ML engineers to integrate trained models into scalable production environments.
Build and maintain real-time and batch inference pipelines for AI-driven tasks.
Contribute to DevOps and MLOps workflows, including CI/CD automation, observability, monitoring, and autoscaling.
Ensure high availability, performance, and security across backend deployments.
Optimize internal tooling, model-serving infrastructure, and large-scale system performance.
Skills and Qualifications
Required:
Strong professional experience with Golang in production-grade systems.
Deep understanding of distributed systems, concurrency models, and network programming.
Familiarity with ML frameworks such as PyTorch, TensorFlow, or JAX.
Hands-on experience with model-serving platforms like Triton Inference Server, TorchServe, ONNX Runtime, or Ray Serve.
Proficiency in containerization (Docker) and orchestration (Kubernetes).
Experience with MLOps tools such as MLflow, Weights & Biases, Kubeflow, or Airflow.
Strong grasp of cloud platforms (AWS, GCP, Azure) and infrastructure-as-code.
Experience with SQL/NoSQL databases, caching systems (Redis), and message queues (Kafka, RabbitMQ).
Proficiency in Git and modern GitHub/GitLab workflows.
Bachelor’s degree in Computer Science, Engineering, or a related field is a plus.
Nice to Have:
Experience working with LLMs or foundation model architectures.
Familiarity with vector databases such as FAISS, Pinecone, or Weaviate.
Exposure to LLM orchestration frameworks (LangChain, LlamaIndex, Haystack).
Experience with graph processing, streaming systems, or real-time inference platforms.
Interest in AI evaluation, interpretability, or safety methodologies.
Why Join Us?
Build state-of-the-art infrastructure powering applied AI at scale.
Collaborate with a world-class team of engineers, scientists, and researchers.
Contribute to open-source tooling and research-driven development.
Competitive compensation with a dynamic, in-office culture in Bangalore.