I bet this won’t have an impact on memory safety, and that interop means C++ compilers will have to be stricter about memory layout and reduce unspecified edge cases.
The U.S. destroying its own economy. Who could’ve asked for a better Christmas present? With Trump at the helm next year, it’s only a matter of time before trade partners tell the U.S. to fuck off and stop ignoring decisions like these.
I’m actually surprised there is no specification. It’s how I thought languages were written: spec first, implementation later. Do RFCs serve this purpose?
That’s pretty cool, but terrifying as well. Can’t wait for somebody to go a step further and start writing proc macros (call it `rusht`) to replace bash scripts with rust scripts. Actually, now that I think about it, not so terrifying. They can probably be debugged better, could be safer (unless someone starts publishing malicious proc macros), allow dependencies to be added to compose better scripts without relying on the system’s package manager, and so much more.
Eventually, painfully, slowly, we’ll move to memory-safe languages. It really is a good idea. Personally, though, I don’t expect it to happen this decade. In the 2030s? Yes, 2020s? No.
This. Unless the government starts introducing financial incentives or penalties (like fines) to force the use of memory-safe languages, ain’t nothing gonna happen.
Maybe read the article…
Difficult? How so? I find compiling C and C++ stuff much more difficult than anything python. It never works on the first try, whereas with python the chances are much, much higher.
What is so difficult to understand about virtual envs? You have global python packages, you can also have per-user python packages, and you can create virtual environments to install packages into. Why do people struggle to understand this?
The global packages are found thanks to default locations, which can be overridden with environment variables. Virtual environments set those environment variables so they point to different locations.
`python -m venv .venv/` means python will execute the module `venv` and tell it to create a virtual environment in the `.venv` folder in the current directory. As mentioned above, the environment variables have to be set to actually use it. That’s when `source .venv/bin/activate` comes into play (there are other scripts for zsh and fish).
Now you can run `pip install $package` and then run the package’s command if it has one.
It’s that simple. If you want to, you can make it difficult by doing `sudo pip install $package` and fucking up your global packages by possibly updating a dependency of another package. That’s the equivalent of updating glibc from 1.2 to 1.3 and breaking every application that depends on 1.2, because glibc doesn’t fucking follow goddamn semver.
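For what it’s worth, the same workflow can be driven from Python’s standard library. A minimal sketch (the package name `requests` and the POSIX paths are just examples):

```python
import subprocess
import venv

# Equivalent of `python -m venv .venv/`: create the environment with pip inside.
venv.create(".venv", with_pip=True)

# You don't strictly need `activate` to install: calling the venv's own pip
# targets its site-packages directly. On Windows the path would be
# .venv\Scripts\pip.exe instead.
subprocess.run([".venv/bin/pip", "install", "requests"], check=True)
```

`activate` is mainly there to put the venv’s `bin/` first on your `PATH` for interactive use.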
As for old versions of python, bro give me a break. There’s pyenv for that if whatever old ass package you’re installing depends on an ancient 10 year old python version. You really think building a C++ package from 10 years ago will work more smoothly than python? Have fun tracking down all the unlocked dependency versions that “Worked On My Machine™” at the start of the century.
The only python packages I have trouble installing are those with C/C++ dependencies which have to be compiled at install time.
Y’all have got to be meme’ing.
Is that a problem with java? In fact, is it even a problem on github where repos are namespaced by user or org?
The bloody managers are the biggest problem. Most don’t understand code, much less the process of making a software product. They force you into idiotic meetings where they want to change how things work because they “don’t have visibility into the process”, which just translates to “I don’t understand what you’re doing”.
Also, trying to force people who love machines (but people less so) into leading people is a recipe for unhappiness.
But at least the bozos at the top get to make the decisions and the cheddar for being ignorant and not listening.
I’d very much welcome a crates.io alternative that doesn’t require github and supports namespacing by username or org. The dependency on a proprietary platform rubs me the wrong way.
This is a great initiative and I wish there were more orgs that did this. However, I’m now convinced that we need open-source licenses which stipulate remuneration when the software is used for financial gain.
Thank you. That’s good to know. In my OS architecture lectures, we were introduced to an OS with core bound threads. I can’t remember if it was a learning OS or something that really existed, hence my doubts.
IINM, whether it’s “true” parallelism depends on the number of hardware cores (which shouldn’t be a problem nowadays). A single physical core means concurrency (even with “hyper-threading”), and multiple cores can mean parallelism. I can’t remember if threads are core-bound or not. Processes can be bound to cores on linux (on other OSes too, most likely).
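On linux, the binding can be done from Python itself. A minimal sketch using `os.sched_setaffinity` (Linux-only; assumes the machine has at least two cores):

```python
import os

# Pin the current process (pid 0 means "this process") to cores 0 and 1.
os.sched_setaffinity(0, {0, 1})

# Read back the set of cores the process is allowed to run on.
print(os.sched_getaffinity(0))
```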
> So I suppose this is the preferred way to do concurrency, there is no async/await
Python does have async/await: it’s syntax sugar for coroutines, which are scheduled on an event loop and can push blocking work out to threads or processes using an executor (doc). The standard library has asyncio, whose docs describe valuable use cases for async/await in python.
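A minimal sketch of what that looks like (the coroutine names are made up; `asyncio.sleep` stands in for real I/O):

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Pretend to do I/O; the event loop runs other coroutines while we wait.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> None:
    # Both coroutines run concurrently on a single thread.
    results = await asyncio.gather(fetch("a", 1.0), fetch("b", 1.0))
    print(results)  # finishes in ~1s total, not ~2s

asyncio.run(main())
```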
> and you won’t use At “just” for a bit of concurrency. Right ?
Is “At” a typo?
We learn a little bit everyday. Thanks!
You’re welcome :) I discovered the GIL the hard way unfortunately. Making another person aware of its existence to potentially save them some pain is worth it.
Does it also support writing, compiling, or exporting to python modules?
Python has a Global Interpreter Lock (GIL) which has been both a bane and a boon. A boon because many basic types are thread-safe, as actions happen in lock step. A bane because despite having multiple threads, there’s still a master coordinating them all, which means there is no parallelism, only concurrency. Python 3.13 allows disabling the GIL, but I can’t say much about that since I haven’t tested it myself. Most likely it means nothing is really thread-safe anymore and it’s up to the developer to handle that.
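A quick way to see the GIL in action (a rough sketch; exact timings will vary by machine):

```python
import threading
import time

def spin(n: int) -> None:
    # Pure-Python CPU work: the thread holds the GIL the whole time.
    while n:
        n -= 1

N = 20_000_000

start = time.perf_counter()
spin(N)
print(f"1 thread:  {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
threads = [threading.Thread(target=spin, args=(N // 4,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# On a GIL build this is no faster than the single-threaded run,
# despite splitting the same work across four threads.
print(f"4 threads: {time.perf_counter() - start:.2f}s")
```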
So, in Python, using multiple threads is not a surefire way to get a performance boost. Small tasks that don’t require many operations are OK for threading, but many cycles may be lost to the GIL. Using threads for I/O-bound stuff is good though, as the main python thread won’t be stuck waiting on those things to complete (reading or writing files, network access, screen access, …). Larger tasks with more operations that are I/O-bound or require parallelism (encoding a video file, processing multiple large files at once, reading large amounts of data from the network, …) are better off as separate processes.
As an example: if you have one large file to read and then split out into multiple small files, threads are a good option. Splitting happens sequentially, but writing to disk is a (comparatively) slow task that one shouldn’t wait on and can be handed to a thread. Doing these operations on multiple large files is worth doing in parallel using multiple processes. Each process will read a file, split it, and write in threads, while one parent process orchestrates the workers, as in the sketch below.
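A minimal sketch of that layout (the `data/*.log` input files and the chunk size are made up; a real version would stream large files instead of reading them whole):

```python
import concurrent.futures
import pathlib

def write_chunk(path: pathlib.Path, lines: list[str]) -> None:
    # Disk I/O releases the GIL, so writer threads overlap nicely.
    path.write_text("".join(lines))

def split_file(src: pathlib.Path, chunk_size: int = 10_000) -> None:
    # Read and split sequentially, hand each chunk off to a writer thread.
    lines = src.read_text().splitlines(keepends=True)
    with concurrent.futures.ThreadPoolExecutor() as pool:
        for i in range(0, len(lines), chunk_size):
            out = src.with_suffix(f".part{i // chunk_size}")
            pool.submit(write_chunk, out, lines[i:i + chunk_size])

if __name__ == "__main__":
    # One worker process per large file; each uses threads internally.
    files = list(pathlib.Path("data").glob("*.log"))
    with concurrent.futures.ProcessPoolExecutor() as pool:
        # list() forces iteration so exceptions from workers surface here.
        list(pool.map(split_file, files))
```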
Of course, your mileage may vary. I’ve run into the issue of requiring parallelism on small tasks, and the only thing that worked was moving that logic out to Cython, outside the GIL (terrible experience). For small, highly parallel operations, Python probably isn’t the right language and something like Rust should be explored.
Even the creators of languages don’t know their own languages 100%. I wouldn’t even call them the limit. So, I’m good enough in my main language that a lot of code doesn’t surprise me. And I try very hard to write code that others can understand as well when in a team.
You know, if they used the PR workflow with a CI that enforced standardised commit messages, this could be quite easily solved? Forcing everything through a mailing list seems to create more work for maintainers…
Not sure I understand this. Is it a `watch clippy`? Or a completely new tool? If it’s new, what does it provide over clippy?
Your post is nearly the epitome of Chesterton’s Fence. You don’t seem to understand why Rust looks the way it does, works the way it does, why it exists, what it’s used for, and what problems it solves, but you’re very happy (or not, which is probably why you wrote this post) to trash it.
There are many responses to your comments that explain things quite well, yet, from what I see, you do not seem to concentrate on those.
And what I quoted is just the icing on top. It looks very much like you have one style of programming and approaching problems (the PHP style of “if it runs, it’s good”) and apply it to every problem. You have used a hammer your whole life and every problem looks like a nail. You can build a good many things with duct tape, nails, and a hammer. It might all do the job well enough for your standards or purposes and at times it might even be the perfect tool for a task.
But now you’ve discovered a screwdriver, tried to hammer in a nail with it, and gotten quite frustrated that it didn’t work well. Instead of considering using a screw, you’ve tossed the screwdriver aside and decided to yell expletives into the ether.
The ether has responded with explanations, but you have chosen to ignore them all and staunchly hold on to your “screwdrivers are shit” conclusion. Had you said “I’m just blowing off steam, don’t take this seriously”, that’s what it would’ve been. However, you seem quite serious. Or, as I said before, you’re just trolling.
I weep for the time lost debugging with println. Good grief. It’s like having access to a time-stopping ability and going “nah, I like trying to add a marker and tracing footsteps”.
Yes, for multi-threaded workloads there aren’t many options, but most workloads are single-threaded, and eschewing a debugger is bonkers to me.
Anti Commercial-AI license