When deploying a model, which technique is suitable for processing data in real-time with minimal latency?
Batch inference
Real-time inference
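The distinction between the two options can be illustrated with a minimal sketch. The `predict` function below is a hypothetical stand-in for a trained model; batch inference scores many stored samples together on a schedule, while real-time inference scores each sample the moment it arrives, keeping latency minimal.

```python
def predict(x):
    """Hypothetical stand-in for a trained model's forward pass."""
    return 2 * x

def batch_inference(samples):
    # Batch inference: process an accumulated set of samples in one pass,
    # typically on a schedule; throughput-oriented, latency is not critical.
    return [predict(s) for s in samples]

def real_time_inference(sample):
    # Real-time inference: score a single sample as soon as it arrives;
    # latency-oriented, suitable for live requests.
    return predict(sample)

if __name__ == "__main__":
    print(batch_inference([1, 2, 3]))  # scores a stored batch
    print(real_time_inference(5))      # scores one live request
```

In production, real-time inference is usually exposed behind a low-latency endpoint (e.g. a REST or gRPC service), whereas batch inference runs as a periodic job over stored data.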
