Yeah, and Linux is waaay behind in other areas. Windows has had a secure attention sequence (ctrl-alt-del to log in) for decades now. Linux still doesn't.
Linux (well, more accurately, X11) has had a SAK for ages now, in the form of CTRL+ALT+BACKSPACE, which immediately kills X11, booting you back to the login screen.
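That combination is disabled by default on most modern distributions, but it can usually be re-enabled through the XKB "terminate" option:

  # restore the CTRL+ALT+BACKSPACE kill switch for the current X session
  setxkbmap -option terminate:ctrl_alt_bksp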
I personally doubt SAK/SAS is a good security measure anyways. If you've got untrusted programs running on your machine, you're probably already pwn'd.
There are many ways to disable CTRL+ALT+DEL on Windows too, from registry tricks to group policy options. Overall, SAK seems to be a relic of the past that should be kept far away from any security consideration.
The "threat model" (if anyone even called it that) of applications back then was bugs resulting in unintended spin-locks, and the user not realizing they're critically short on RAM or disk space.
This setup came from the era of Windows running basically everything as administrator or something close to it.
The whole windows ecosystem had us trained to right click on any Windows 9X/XP program that wasn’t working right and “run as administrator” to get it to work in Vista/7.
The more powerful form is the UAC full privilege escalation dance that Windows has done since Vista, which is a surprisingly elegant UX solution:
1. Snapshot the desktop
2. Switch to a separate secure desktop
3. Display the snapshot in the background, greyed out, with the UAC prompt itself rendered on the secure desktop, topmost
It avoids any chance of a user-space program faking or interacting with a UAC window.
Clever way of dealing with the train wreck of legacy Windows user/program permissioning.
My only experience with non-UAC endpoint privilege management was BeyondTrust and it seemed to try to do what UAC did but with a worse user experience. It looks like the Intune EPM offering also doesn't present as clear a delineation as UAC, which seems like a missed opportunity.
One of the things Windows did right, IMO. I hate that elevation prompts on macOS and most linux desktops are indistinguishable from any other window.
It's not just visual either. The secure desktop is in protected memory, and no other process can access it. Only NTAUTHORITY\System can initiate showing it or interact with it in any way.
You can also configure it to require you to press CTRL+ALT+DEL on the UAC prompt to be able to interact with it and enter credentials as another safeguard against spoofing.
I'm not even sure if Wayland supports doing something like that.
It made a lot more sense in the bygone years of users casually downloading and running exe's to get more AIM "smilies", or putting in a floppy disk or CD and having the system autoexec whatever malware the last user of that disk had. It was the expected norm for everybody's computer to be an absolute mess.
These days, things have gotten far more reasonable, and I think we can generally expect a linux desktop user to only run software from trusted sources. In this context, such a feature makes much less sense.
It's useful for shared spaces like schools, universities and internet cafes. The point is that without it you can display a fake login screen and gather people's passwords.
I actually wrote a fake version of RMNet login when I was in school (before Windows added ctrl-alt-del to login).
This indeed is not Algol (or rather C) heritage, but Fortran heritage, not memory offsets but indices in mathematical formulae. This is why R and Julia also have 1-based indexing.
Lots of systems I grew up with were 1-indexed and there's nothing wrong with it. In the context of history, C is the anomaly.
I learned the Wirth languages first (and then later did a lot of programming in MOO, a prototype OO 1-indexed scripting language). Because of that early experience I still slip up and make off by 1 errors occasionally w/ 0 indexed languages.
(Actually both Modula-2 and Ada aren't strictly 1 indexed since you can redefine the indexing range.)
It's fine, I can see the advantages. I just think it's a weird level of blindness to act like 1 indexing is some sort of aberration. It's really not. It's actually quite friendly for new or casual programmers, for one.
I think the objection is not so much blindness as the idea that professional tools should not generally be tailored to the needs of new or casual users at the expense of experienced users.
As I understand it Julia changed course and is attempting to support arbitrary index ranges, a feature which Fortran enjoys. (I'm not clear on the details as I don't use either of them.)
Pascal, frankly, allowed you to index arrays by any enumerable type; you could use Natural (1-based), or you could use 0..whatever. Same with Modula-2; writing in it, I freely used 0-based indexing when I wanted to interact with hardware where it made sense, and 1-based indexing when I wanted to implement some math formula.
> Lots of systems I grew up with were 1-indexed and there's nothing wrong with it. In the context of history, C is the anomaly.
The problem is that Lua is effectively an embedded language for C.
If Lua never interacted with C, 1-based indexing would merely be a weird quirk. Because you are constantly shifting across the C/Lua barrier, 1-based indices become a disaster.
That example only shows the opposite of what it sounds like you’re saying, although you could be getting at a few different true things. Anyway:
- Every property access in JavaScript is semantically coerced to a string (or a symbol, as of ES6). All property keys are semantically either strings or symbols.
- Property names that are the ToString() of an unsigned integer less than 2^32 − 1 are considered indexes for the purposes of the following two behaviours:
- For arrays, indexes are the elements of the array. They’re the properties that can affect its `length` and are acted on by array methods.
- Indexes are ordered in numeric order before other properties. Other properties are in creation order. (In some even nicher cases, property order is implementation-defined.)
  { let a = {}; a['1'] = 5; a['0'] = 6; Object.keys(a) }
  // ['0', '1']

  { let a = {}; a['1'] = 5; a['00'] = 6; Object.keys(a) }
  // ['1', '00']
There's nothing wrong with 1-based indexing. The only reason it seems wrong to you is because you're familiar with 0-based, not because it's inherently worse.
> Debian is kind of slow in adapting to the modern world.
Yeah definitely. I guess this is a result of their weird idea that they have to own the entire world. Every bit of open source Linux software ever made must be in Debian.
If you have to upgrade the entire world it's going to take a while...
https://wiki.debian.org/UsingQuilt but the short form is that you keep the original sources untouched; then, as part of building the package, you apply everything in a `debian/patches` directory, do the build, and then revert them. Sort of an extreme version of "clearly labelled changes", but tedious to work with since you need to apply, change and test, then stuff the changes back into diff form (the quilt tool uses a push/pop mechanism, so this isn't entirely mad).
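Roughly, the day-to-day loop looks like this (package name made up; the commands are quilt's own, with QUILT_PATCHES pointed at debian/patches in a Debian-style setup):

  cd somepkg-1.2.3
  quilt push -a               # apply every patch listed in the series file
  quilt new fix-thing.patch   # start a new patch on top of the stack
  quilt add src/foo.c         # register the file before editing it
  $EDITOR src/foo.c           # make and test the change
  quilt refresh               # write the edits back into the patch file
  quilt pop -a                # unapply everything, back to pristine sources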
Yea, so? Debian goes back 32 or more years, and quilt dates to approximately the same time. It’s probably just a year or two younger than Debian.
At Mozilla some developers used quilt for local development back when the Mozilla Suite source code was kept in a CVS repository. CVS had terrible support for branches. Creating a branch required writing to each individual ,v file on the server (and there was one for every file that had existed in the repository, plus more for the ones that had been deleted). It was so slow that it basically prevented anyone from committing anything for hours while it happened (because otherwise the branch wouldn’t necessarily get a consistent set of versions across the commit), so feature branches were effectively impossible. Instead, some developers used quilt to make stacks of patches that they shared amongst their group when they were working on larger features.
Personally I didn’t really see the benefit back then. I was only just starting my career, fresh out of university, and hadn’t actually worked on any features large enough to require months of work, multiple rounds of review, or even multiple smaller commits that you would rebase and apply fixups to. All I could see back then were the hoops that those guys were jumping through. The hoops were real, but so were the benefits.
Quilt is difficult to maintain, but a quilt-like workflow? Easy: it's just a branch with all patches as commits. You can re-apply those to new releases of the upstream by using `git rebase --onto $new_upstream_commit_tag_or_branch`.
If you have a naming convention for your tags and branches, then you can always identify the upstream "base" upon which the Debian "patches" are based, and you can trivially use `git log` to list them.
Really, Git has a solution to this. If you insist that it doesn't without looking, you'll just keep re-inventing the wheel badly.
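A minimal sketch of that workflow, with made-up tag and branch names:

  # Debian patches live as commits on a branch forked from the upstream tag
  git checkout -b debian/1.2.3 upstream/1.2.3
  # ...commit each patch...

  # new upstream release: replay the whole patch stack onto it
  git checkout -b debian/1.3.0 debian/1.2.3
  git rebase --onto upstream/1.3.0 upstream/1.2.3 debian/1.3.0

  # list the current patch stack
  git log --oneline upstream/1.3.0..debian/1.3.0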
Do you ever really want this? I don't recall wanting this. But you can still get this: just list the ${base_ref}..${deb_ref} commit ranges, select the commit you want, and diff the `git show` of the selected commits. It helps here to keep the commit synopsis the same.
E.g.,
  # find, in each patch stack, the commit with the given subject line
  c0=$(git log --oneline ${base_ref0}..${deb_ref0} |
      grep "^[^ ]* The subject in question" |
      cut -d' ' -f1)
  c1=$(git log --oneline ${base_ref1}..${deb_ref1} |
      grep "^[^ ]* The subject in question" |
      cut -d' ' -f1)
  if [[ -z $c0 || -z $c1 ]]; then
      echo "Error: commits not found"
  else
      diff -ubw <(git show "$c0") <(git show "$c1")
  fi
See also the above commentary about Gerrit and commit IDs.
(Honestly I don't need commit IDs. What happens if I eventually split a commit in a patch series into two? Which one, if either, gets the old commit ID? So I just don't bother.)
People keep saying “just use Git commits” without understanding the advantages of the Quilt approach. There are tools to keep patches as Git commits that solve this, but “just Git commits” do not.
This seems pretty silly to me. Their solution for how to get structured output is pretty much just "don't". Well, we still need the structured output, so what do we do then?
> you need a parser that can find JSON in your output and, when working with non-frontier models, can handle unquoted strings, key-value pairs without comma delimiters, unescaped quotes and newlines; and you need a parser that can coerce the JSON into your output schema, if the model, say, returns a float where you wanted an int, or a string where you wanted a string[].
Oh cool, I'm sure that will be really reliable. Facepalm.
> Allow it to respond in a free-form style: let it refuse to count the number of entries in a list, let it warn you when you've given it contradictory information, let it tell you the correct approach when you inadvertently ask it to use the wrong approach
This makes zero sense. The whole point of structured output is that it's a (non-AI) program reading it. That program needs JSON input with a given schema. If it is able to handle contradictory-information warnings, or being told you're using the wrong approach, then that will be in the schema anyway!
I think the point about thinking models is interesting, but the solution to that is obviously to allow it to think without the structuring constraint, and then feed the output from that into a query with the structured output constraint.
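A rough sketch of that two-pass idea against the OpenAI chat completions endpoint (the model name, prompts, and schema here are placeholders; any provider with a JSON mode should work the same way):

  # Pass 1: free-form reasoning, no format constraint
  answer=$(curl -s https://api.openai.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"model":"gpt-4o","messages":[{"role":"user","content":"Think through the problem out loud."}]}' |
    jq -r '.choices[0].message.content')

  # Pass 2: transcribe that answer into the schema, with JSON mode on
  curl -s https://api.openai.com/v1/chat/completions \
    -H "Authorization: Bearer $OPENAI_API_KEY" \
    -H "Content-Type: application/json" \
    -d "$(jq -n --arg a "$answer" '{model: "gpt-4o",
          response_format: {type: "json_object"},
          messages: [{role: "user", content:
            ("Rewrite the following as JSON matching {\"result\": string, \"warnings\": [string]}:\n\n" + $a)}]}')" |
    jq -r '.choices[0].message.content'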
It turns out it is worth the effort. Once you have got past the "fighting the borrow checker" phase (which isn't nearly as bad as it used to be, thanks to improvements to its abilities), you get some significant benefits:
* Strong ML-style type system that vastly reduces the chance of bugs (and hence the time spent writing tests and debugging).
* The borrow checker really wants you to have an ownership tree, which it turns out is a really good way to avoid spaghetti code. It's like a no-spaghetti enforcer. It's not perfect of course, and sometimes you do need non-tree ownership, but overall it tends to make programs more reliable, again reducing debugging and test-writing time.
So it's more effort to write the code to the point that it will compile/run at all. But once you've done that you're usually basically done.
Some other languages have these properties (especially FP languages), but they come with a whole load of other baggage and much smaller ecosystems.
> So it's more effort to write the code to the point that it will compile/run at all. But once you've done that you're usually basically done.
Not if I don't know what I'm doing because it's something new. The way I'm learning how to do it is by building it, so I want to build it quickly so that I can get in more feedback loops as I learn. Also, I want to learn by example, so I actually want to get runtime errors, not type system errors. Later, when I do know what I'm doing, then sure, I want to encode as much as I can in my types. But before that... don't get in my way!
Yeah, it is a fair point that runtime errors are sometimes easier to understand than compile-time errors. They're still a much worse option of course - for the many reasons that have already been discussed - but maybe compile-time errors could be improved by providing an example of the kind of runtime error you could get if you didn't fix it (had the language hypothetically been dynamically typed). Perhaps that would be easier to understand for some people or some errors.
There's a (Curry-Howard) analogue here with formal verification and counter-examples.
Then again I understood exactly what it was saying every time, which is more than I can say for some of the other traffic on that recording. I’m not sure synthetic-sounding means bad here.
The embedded systems qualified for use in general aviation avionics have very limited hardware resources. They are severely constrained by form factor, power, and cooling. It's amazing that the developers were able to get speech synthesis working so well.
This. If it sounds too human, ATC is going to try to help and possibly provide vectors, as they should. But the way the system works, ATC needs to be prioritizing clearing the runway and keeping aircraft away.
You can't drive at 12 though, surely? And you have to account for the fact that young people are more likely to die in crashes and more likely to use weed.
This kind of test seems silly. It's going to be far too hard to remove the confounding variables. Much easier just to give people different levels of weed and have them do driving tests. Directly measure their driving skill instead of doing it by shitty proxy like this.
>This kind of test seems silly. It's going to be far too hard to remove the confounding variables. Much easier just to give people different levels of weed and have them do driving tests. Directly measure their driving skill instead of doing it by shitty proxy like this.
Differing levels of THC impact people differently, both because of potential "tolerance" in frequent users as compared with occasional users, and because of individual responses to cannabis (and even different cannabis strains with varied chemical profiles). There may well be other confounding factors as well.
Cannabis does not affect everyone the same way. It doesn't even affect the same people in the same way every time.
As such, while the testing you suggest may well be useful over the long term, it will require large populations and repeated testing at varying levels of both subjective intoxication and THC levels in the blood over extended periods to get good data about how THC use (both in temporal proximity and overall usage patterns) causes impairment.
As anecdata, I can absolutely say that lower levels of THC consumption result in much more impairment if cannabis hasn't been used recently, and higher levels result in less impairment if there has been recent use.
That's not to say that driving (or any high-risk activity) is appropriate while actually high. It is not. Driving while impaired (by anything) is a terrible idea.