Generative AI Architect (Big Data & Java Expertise)
Hybrid role
New Brunswick, NJ
We are seeking a skilled and innovative Generative AI Architect with a strong background in Big Data and Java to lead our AI-driven initiatives. The ideal candidate will design and implement cutting-edge generative AI models and systems that leverage large datasets, while ensuring seamless integration with Java-based platforms. This role requires a deep understanding of AI/ML technologies, data architecture, and software engineering to drive the next generation of AI-powered solutions.
Key Responsibilities:
Lead the design and architecture of generative AI models and systems that utilize large-scale data. Collaborate with data scientists, ML engineers, and product teams to create scalable and efficient AI-driven platforms.
Leverage Big Data technologies (Hadoop, Spark, etc.) to handle massive datasets and ensure efficient data ingestion, processing, and storage for AI applications.
Architect, develop, and integrate AI models into Java-based enterprise applications, ensuring smooth operation within existing systems.
Work with teams to develop generative AI models (e.g., GPT, GANs) and optimize them for performance, scalability, and business needs. Ensure models are trained on relevant datasets and continuously updated.
Collaborate with cross-functional teams, including data engineering, software development, and business stakeholders, to drive AI initiatives. Mentor junior engineers and provide guidance on AI and Big Data architecture.
Architect and deploy AI systems in cloud environments (AWS, GCP, Azure) and ensure robust infrastructure for AI model deployment and scalability.
Stay updated with the latest trends and advancements in AI, Big Data, and Java technologies, applying this knowledge to enhance systems and processes.
Required Qualifications:
8+ years of experience in software engineering, AI, or data architecture roles.
Proven experience in designing and deploying AI models, particularly generative AI (e.g., GPT, GANs, VAEs).
Strong background in Big Data technologies such as Hadoop, Spark, Kafka, and NoSQL databases.
Extensive hands-on experience with Java, including multithreading, JVM tuning, and integration with large-scale systems.
Experience with cloud platforms (AWS, GCP, Azure) for AI model deployment and scaling.
Expertise in machine learning frameworks (TensorFlow, PyTorch, Keras).
Strong understanding of data pipelines, ETL processes, and distributed systems.
Proficiency in AI algorithms and techniques (e.g., NLP, computer vision, deep learning).
Knowledge of microservices architecture, RESTful APIs, and containerization (Docker, Kubernetes).
Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
Please submit your resume on LinkedIn, or share your resume with