Entwicklungsbuch
vLLM has quickly become the go-to inference engine for developers who need high-throughput LLM serving. We brought vLLM to Docker Model Runner for NVIDIA GPUs on Linux, then extended it to Windows via WSL2. Until now, macOS was left out. That changes today: Docker Model Runner now supports vllm-metal, a new backend that brings vLLM inference to macOS using Apple Silicon’s Metal GPU. If you have a Mac with an M-series chip, you can now run MLX models through vLLM with the same OpenAI-compatible API, same …

Just published by Docker: read the full post on the Docker blog.
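
Because the new backend speaks the same OpenAI-compatible API as the other Model Runner engines, any standard OpenAI client should work against it. Here is a minimal sketch, assuming Model Runner's TCP host access is enabled on its documented default port 12434 and that a suitable MLX model has already been pulled; the model name below is purely illustrative, not an actual catalog entry:

```python
# Minimal sketch: query Docker Model Runner's OpenAI-compatible endpoint
# from the host. Assumes TCP host access is enabled on the default port
# 12434; the model name is a placeholder for one you have pulled locally.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12434/engines/v1",  # assumed default Model Runner endpoint
    api_key="not-needed",  # local endpoint; no real API key is required
)

response = client.chat.completions.create(
    model="ai/example-mlx-model",  # hypothetical; substitute a model you have pulled
    messages=[{"role": "user", "content": "Say hello from Apple Silicon."}],
)
print(response.choices[0].message.content)
```

The point of the OpenAI-compatible surface is exactly this: code written against a hosted endpoint can be retargeted at the local vllm-metal backend by changing only the base URL and model name.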
