Sarcouncil Journal of Applied Sciences Aims & Scope

Sarcouncil Journal of Applied Sciences
An open-access, peer-reviewed international journal
Publication Frequency - Monthly
Publisher Name - SARC Publisher
ISSN (Online) - 2945-3437
Country of Origin - Philippines
Impact Factor - 3.78; ICV - 64
Language - English
Keywords
- Biology, Chemistry, Physics, Environmental Science, Business, Economics, Plant-Microbe Interactions, Post-Harvest Biology
Editors

Dr Ifeoma Christy
Associate Editor
Sarcouncil Journal of Entrepreneurship And Business Management

Dr Hazim Abdul-Rahman
Associate Editor
Sarcouncil Journal of Applied Sciences

Entessar Al Jbawi
Associate Editor
Sarcouncil Journal of Multidisciplinary

Rishabh Rajesh Shanbhag
Associate Editor
Sarcouncil Journal of Engineering and Computer Sciences

Dr Md. Rezowan ur Rahman
Associate Editor
Sarcouncil Journal of Biomedical Sciences

The Role of Cloud-Native Architectures in Accelerating Machine Learning Workflows through Data Engineering Innovations
Keywords: Cloud-native architectures, machine learning workflows, serverless computing, data engineering, scalability, inference latency, cost optimization, microservices, AI deployment, automation
Abstract: The rapid advancement of machine learning (ML) necessitates scalable, efficient, and cost-effective computing environments. Traditional ML workflows often face challenges related to long training times, high inference latency, infrastructure costs, and scalability limitations. This study explores the role of cloud-native architectures in accelerating ML workflows through data engineering innovations. By leveraging microservices, containerization, serverless computing, and automated data pipelines, cloud-native environments optimize ML operations while reducing computational overhead. The results indicate a 50% reduction in training time and inference latency, 50-55% cost savings, and a threefold increase in scalability compared to traditional ML implementations. Moreover, cloud-native solutions enhance fault tolerance by reducing system recovery time by 80%, ensuring greater reliability for real-time AI applications. Statistical analyses, including regression modeling, survival analysis, and principal component analysis (PCA), confirm the efficiency gains of cloud-based ML workflows. The findings suggest that organizations adopting cloud-native ML architectures can achieve faster model deployment, reduced infrastructure costs, and enhanced system resilience. As cloud-native computing evolves, its integration with machine learning will play a pivotal role in shaping future AI-driven solutions.
Authors
- Karthik Puthraya, Software Engineer at Netflix
- Rachit Gupta, Senior Architect at Guardian Life
- Beverly DSouza, Data Engineering at Patreon