• Sure, and we use those, like numpy, scipy, and tensorflow. Python is best when gluing libraries together, so the more you can get out of those libraries, the better.

    Python isn’t fast, but it’s usually fast enough to shuffle data from one library to the next.

    • @hark@lemmy.world
      14 days ago
      Usually, but when it isn’t, you’ve got a bottleneck. Multithreaded performance is a major weak point if you need to do any processing that isn’t handled by one of the libraries.
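
      A minimal illustration of that bottleneck (assuming stock CPython, where the GIL is still in place): a CPU-bound function gains nothing from a thread pool.

      ```python
      import time
      from concurrent.futures import ThreadPoolExecutor

      def cpu_bound(n):
          # Pure-Python arithmetic holds the GIL the entire time.
          total = 0
          for i in range(n):
              total += i * i
          return total

      if __name__ == "__main__":
          N = 5_000_000

          start = time.perf_counter()
          for _ in range(4):
              cpu_bound(N)
          print(f"serial:   {time.perf_counter() - start:.2f}s")

          start = time.perf_counter()
          with ThreadPoolExecutor(max_workers=4) as pool:
              list(pool.map(cpu_bound, [N] * 4))
          # Roughly the same (or worse) wall time: the threads take turns on the GIL.
          print(f"threaded: {time.perf_counter() - start:.2f}s")
      ```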

      • @sugar_in_your_tea@sh.itjust.works
        14 days ago

        Then you need to break up your problem into processes. Python doesn’t really do multi-threading (hopefully that changes with the GIL going away), but most things can scale reasonably well in a process pool if you manage the worker queue properly (e.g. RabbitMQ works well).
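
        A minimal sketch of that pattern, using the standard library’s multiprocessing.Pool as the worker queue (RabbitMQ would fill the same role when the workers live on other machines):

        ```python
        from multiprocessing import Pool

        def transform(bounds):
            # Runs in a separate process: its own interpreter, its own GIL.
            start, stop = bounds
            return sum(i * i for i in range(start, stop))

        if __name__ == "__main__":
            step = 1_000_000
            chunks = [(n, n + step) for n in range(0, 8 * step, step)]
            with Pool(processes=4) as pool:
                # map() hands chunks to the workers and gathers the results.
                results = pool.map(transform, chunks)
            print(sum(results))
        ```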

        It’s not as good as proper threading, but it’s a lot simpler and easier to scale horizontally. You can rewrite certain parts later if hosting costs become a larger issue than dev costs.

        • @hark@lemmy.world
          14 days ago

          A process pool means extra copying of data between processes, which incurs a huge cost, and it’s made worse by the fact that workloads which parallelize well tend to involve large amounts of data.

          • Yup, which is why you should try to limit the copying by designing your parallel processing around it (see the sketch below). If you can’t, handle the threading in a native library or something and scale vertically instead of horizontally. Or pick a different language if it’s a huge part of your app.
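
            For example, a minimal sketch (assuming Python 3.8+ and numpy; the block name "big_array" is made up) using multiprocessing.shared_memory, so workers read one large array in place instead of each receiving a pickled copy:

            ```python
            import numpy as np
            from multiprocessing import Pool, shared_memory

            SHM_NAME = "big_array"  # hypothetical name, just for this sketch
            SIZE = 10_000_000       # float64 elements

            def worker(bounds):
                # Attach to the existing block by name instead of copying the data.
                start, stop = bounds
                shm = shared_memory.SharedMemory(name=SHM_NAME)
                data = np.ndarray((SIZE,), dtype=np.float64, buffer=shm.buf)
                total = float(data[start:stop].sum())
                shm.close()
                return total

            if __name__ == "__main__":
                shm = shared_memory.SharedMemory(name=SHM_NAME, create=True, size=SIZE * 8)
                data = np.ndarray((SIZE,), dtype=np.float64, buffer=shm.buf)
                data[:] = 1.0  # fill once; workers read it in place

                quarter = SIZE // 4
                bounds = [(i, i + quarter) for i in range(0, SIZE, quarter)]
                with Pool(processes=4) as pool:
                    print(sum(pool.map(worker, bounds)))  # 10000000.0

                shm.close()
                shm.unlink()  # only the owner unlinks the block
            ```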

            But in a lot of cases, it’s reasonable to stick with Python and scale horizontally. That has value if you’re otherwise a Python shop.