Benchmark results for LLM inference on Apple MLX
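For context, below is a minimal sketch of how a throughput number of this kind could be measured with the mlx-lm package. The model name, prompt, and token count are illustrative assumptions, not the exact configuration behind the reported results.

```python
# Hedged sketch: time a single generation and report tokens/second.
# The checkpoint below is an assumption; any MLX-converted model works.
import time

from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

prompt = "Explain the difference between latency and throughput."
max_tokens = 256

start = time.perf_counter()
text = generate(model, tokenizer, prompt=prompt, max_tokens=max_tokens)
elapsed = time.perf_counter() - start

# Count generated tokens by re-encoding the output text.
generated = len(tokenizer.encode(text))
print(f"{generated} tokens in {elapsed:.2f}s -> {generated / elapsed:.1f} tok/s")
```

Note that `generate(..., verbose=True)` also prints prompt and generation tokens-per-second directly, which may be closer to how the figures here were collected.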