We consider computing systems that partition jobs into tasks, add redundancy through coding, and assign the encoded tasks to different computing nodes for parallel execution. The expected execution time of a job depends on the level of redundancy. Because the computing nodes execute large jobs in batches of tasks, we show that the expected execution time depends on the batch size as well. For a fixed number of parallel servers and given system parameters, the batch size that minimizes the execution time depends on the level of redundancy. Furthermore, we show how to jointly optimize the redundancy level and the batch size to reduce the expected job completion time under two service-time distributions. Simulation results support these claims.
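As a rough illustration of the redundancy and batch-size trade-off described above, the following is a minimal Monte Carlo sketch, not the paper's exact model: it assumes an (n, k) MDS-type code in which each of n servers executes a batch of b tasks with i.i.d. shifted-exponential service times, and the job finishes once any k servers complete their batches. The function name job_time_per_task and the parameters shift, rate, and reps are illustrative choices introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def job_time_per_task(n, k, b, shift=1.0, rate=1.0, reps=10_000):
    """Monte Carlo estimate of the expected completion time per useful task.

    Assumed toy model: each of n servers runs a batch of b tasks whose
    service times are i.i.d. shifted-exponential(shift, rate); the job is
    done once any k of the n servers finish their batches.
    """
    # Per-server batch time: deterministic shift plus b exponential task times.
    batch = shift * b + rng.exponential(1.0 / rate, size=(reps, n, b)).sum(axis=2)
    # Job completion time = k-th order statistic of the n batch times.
    kth = np.sort(batch, axis=1)[:, k - 1]
    # Normalize by the k*b useful tasks so different batch sizes are comparable.
    return kth.mean() / (k * b)

if __name__ == "__main__":
    k = 4
    for n in (4, 6, 8):          # redundancy level (number of servers)
        for b in (1, 2, 4):      # batch size per server
            print(f"n={n}, b={b}:  E[T]/task ~ {job_time_per_task(n, k, b):.3f}")
```

Under these assumptions, sweeping n and b in the sketch shows how the per-task completion time varies with both the redundancy level and the batch size, which is the trade-off the joint optimization targets.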