On the Fediverse also as @mapto@qoto.org
You can also find me as @mapto@masto.bg
I live in Italy: @mapto@feddit.it
Just trying not to confuse realistic testing with self-deception :) I’m not convinced that testing with synthetic data can pass for a production environment.
It is not realistic to replicate a production setup in development when you’re working with sensitive user data. I’ve worked in different contexts (law enforcement, healthcare, financial services) with complicated setups (in one instance including something called a pre-staging environment), but never would a sizeable team of developers have access to user data, and thus to a realistic setup in terms of data volume, let alone data quality.
I’m sorry, but this doesn’t sound very convincing. The strongest (and reiterated) argument is “venv is standard”, but so is Docker.
Looks exciting, and the basic example in the user guide seems more intuitive than pandas. Looking forward to seeing how it’s going to integrate with bokeh and plotnine, though.
Upfront analysis and design are largely independent of the technology, particularly at the I/O level.
Q: what do we do? A: profile and decompose. That shouldn’t be such a distant thought.
Definitely my preference. However, for someone just starting out (and not used to pressing TAB or calling help()), an empty prompt might be intimidating.
That’s why I typically suggest interactive tutorials, e.g. either of these two: https://www.learnpython.org/en/Hello%2C_World! https://futurecoder.io/course/#IntroducingTheShell
If you do that, nothing will actually be checked. You need to explicitly run
pyright
in CI.
Are you suggesting that you prefer to do the type validation upon execution? I’d like to have the checks done beforehand, be it in the IDE during coding or in CI. This way the feedback loop is shorter.
Then, backwards compatibility is a big thing in Python, unlike Node. So when type hints were introduced in 3.5 with PEP 484, they had to be optional.
At least TypeScript defines the semantics of its type hints. Python only defines the syntax! You can have multiple type checkers that conflict with each other!
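To make the optionality concrete, here’s a minimal sketch (the function is made up for illustration): annotations are not enforced at runtime, so a mismatch only surfaces if a checker looks at the code beforehand.

```python
# Type hints are only annotations: Python happily runs code that violates them.
def double(x: int) -> int:
    return x * 2

print(double("ha"))  # runs fine and prints "haha"; no runtime error

# A static checker run beforehand (pyright, mypy, ...) flags this call instead,
# because "ha" is a str passed where an int is annotated.
```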
It is a bit more complicated than that. Here’s a quote from the above-mentioned PEP (3.5 was back in 2015; we’re at 3.12 now, and type hints have evolved):
Note that this PEP still explicitly does NOT prevent other uses of annotations, nor does it require (or forbid) any particular processing of annotations, even when they conform to this specification. It simply enables better coordination, as PEP 333 did for web frameworks.
Have you looked at this one? https://pypi.org/project/onboot/
I guess the answer at this point in time is: it allows you to define the function replacements that matter to you in pnk.lang. But if so, ksh is not a first choice for maintainable code.
So it boils down to: can it “transpile” (or rather “transpret”) its own code?
Even looking into the readme and pink.lang, I’m still unsure what this does. I can imagine, but a single example would be nice. Bonus points if it’s actually something useful.
From the readme on GitHub it appears that “any” excludes MySQL and SQLite as destinations, and this among the dozen or so DBMSs they care to list.
To me it depends on the base image. Some don’t have curl but do have wget. I’d go with the flow rather than installing one myself, especially if I can get away without adding more layers for an image of my own and/or can use the same command for all containers.
Ok, that was stupid. Doing a healthcheck with wget does what wget does: it downloads the result. I had to add --spider to stop that:
wget -nv --spider http://localhost:8000 || exit 1
Well, I do need OpenAPI (Swagger). What I don’t need is the generation of thousands of identical static files. Out of all these generated files, index.html would have been enough; I don’t need index.html.1, etc.
https://github.com/pgp/XFiles is what I’ve been using, and I’m pretty happy with it.
In both XML and JSON you have lists and nested hierarchies (I use this term to abstract away from dictionaries/maps, which are not exactly represented in XML). These allow for browsing/iterating and for filtering when looking for a particular node.
One difference is that nodes in XML are named (tags). Another thing you have in XML but not in JSON is attributes. A good example of their use is querying by tag name, node id, or class attribute in HTML (which is a loose example of XML). To do the equivalent in JSON, you need to work with keys and values, which are less structured and (arguably as a consequence) often missing such metadata. HTML is a popular example, but pretty much any XML has ids and other meta tags and attributes. JSON standards typically don’t, and it’s a long, separate topic whether this is due to the characteristics of the format itself.
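As a minimal sketch of that difference, using only the standard library (the document content is made up for illustration):

```python
import json
import xml.etree.ElementTree as ET

# XML: nodes are named and carry attributes, so a lookup can target them directly
root = ET.fromstring('<library><item id="a1" class="book">Dune</item></library>')
node = root.find('.//item[@id="a1"]')  # ElementTree supports a subset of XPath
print(node.text)  # Dune

# JSON: the same lookup relies on a convention that an "id" key exists
data = json.loads('{"items": [{"id": "a1", "class": "book", "name": "Dune"}]}')
match = next(i for i in data["items"] if i.get("id") == "a1")
print(match["name"])  # Dune
```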
PS: another big difference is that XML also allows for comments, which makes it possible to encode intent, not only content.
Actually XPath is arguably more flexible than anything JSON offers. There’s also JSONPath, but I don’t think I’ve seen it meaningfully used.
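For completeness, a rough sketch of what JSONPath looks like, assuming the third-party jsonpath-ng package (the data is made up for illustration):

```python
from jsonpath_ng import parse  # third-party: pip install jsonpath-ng

data = {"items": [{"id": "a1", "name": "Dune"}, {"id": "b2", "name": "Solaris"}]}

# "$.items[*].name" reads much like the XPath "//item/name" would over XML
names = [match.value for match in parse("$.items[*].name").find(data)]
print(names)  # ['Dune', 'Solaris']
```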
I didn’t realise. It wasn’t paywalled for me on the phone.