But imagine that once in a while they have to go out to work: they might take the laptop along, and then the laptop and the sticky note together are vulnerable to theft, whether accidental or intentional.
"Just" is used in chess commentary frequently and usually a bit too flippantly as the speaker hasn't gone through all the necessary calculations as the players have to. Sam Shankland mentions this. https://youtu.be/GbFgmXqVLl8?t=1069;https://streamable.com/yuglrw
I haven't used this for large deployments but I use it for my personal server and it works perfectly. Almost everything is built in and I can easily write my own custom operations when I need them. Documentation is good and the operations are well designed.
The only downside is that I couldn't get it to work with my SSH agent, but that might be a problem with Paramiko rather than Pyinfra.
Having played around with Clojure and Scheme for a while (though I never got too serious), I always thought homoiconicity and macros were extremely cool concepts, but I never actually ran into a need for them in my everyday work.
>Now, if we agree that the ability to manipulate source code is important to us, what kind of languages are most conducive for supporting it?
This is useful for compiler programmers, or maybe also those writing source code analyzers/optimizers, but is that it? On occasion I have had to write DSLs for the user input, but in these cases the (non-programmer) users didn't want to write Lisp so I used something like Haskell's parsec to parse the data.
The remote code example given in the post is compelling, but again seems a bit niche. I don't doubt that it's sometimes useful but is it reason enough to choose the language? Are there examples of real-life (non-compilers-related) Lisp programs that show the power of homoiconicity?
Same goes with the concept of "being a guest" in the programming language. I have never wanted to change "constant" to "c". Probably I'm not imaginative enough, but this has never really been an issue for me. Perhaps it secretly has though, and some of my problems have been "being a guest" in disguise.
In my experience, macros in Lisp codebases mainly abstract away common boilerplate patterns. Common Lisp has WITH-OPEN-FILE to open a file and make sure it closes if the stack unwinds. This is a macro based on the UNWIND-PROTECT primitive, which ensures execution of a form if the stack unwinds. Many projects will have various with-foo macros to express this pattern for arbitrary things that are useful in the context of that project (though not all with- macros need UNWIND-PROTECT). Another example is custom loops that come up over and over.
Let's say I'm writing a chess engine. I regularly want to iterate over the legal moves in a position. So I might make a macro so I can write
(do-moves (mv pos) ...)
I find that because doing this is so simple, well written lisp codebases tend to be pretty easy to read. There's less of that learning curve in a new codebase of getting used to the codebase's particular boilerplate rituals, and learning to pick out the code that's interesting. In a good lisp codebase all the ritual is hidden away in self-documenting macros.
Of course this can get taken too far and then it becomes nightmarish to understand the codebase because so much stuff is in various complex macros that it's hard to tell where the shit goes down.
I'm not a JS programmer, but I didn't find that code terribly readable. And from a performance point of view, I would not want to copy the board state n times, though that's beside the point. It's a bit of a contrived example, of course. But what this looks like in a real engine would be something like (no language in particular):
pseudo_legal = move_generator(pos, PSEUDOLEGAL);
while true {
    mv = pseudo_legal.next();
    if !mv {
        break;
    }
    if mv == tt.excluded_move || !pos.legal(mv) {
        continue;
    }
    pos.make_move(mv);
    ...
    pos.unmake_move();
}
The pseudolegal stuff and all that has to do with various performance considerations. This is roughly what Stockfish's move loop looks like. It looks overly verbose because abstracting these things away with functions adds runtime overhead.
With macros you can have your cake and eat it too. While it's technically true you're passing around unevaluated code, none of this happens at runtime but at compile time (technically macro-expansion time, but that's beyond the scope of this comment). Think of it as the ability to inline pretty much anything you want.
In this particular example, there's little to none. Truth be told, I don't like WITH-FOO as an example of why macros are fun, because WITH-FOO macros are commonly implemented (or implementable) as simple wrappers around CALL-WITH-FOO functions.
For instance, if we had a CALL-WITH-OPEN-FILE function (it's not too hard to write one yourself!), the following two syntaxes - one macro, one functional - would be equivalent:
Notice that both of these accept a pathname, additional arguments passed to OPEN, and code to be called with the opened stream. The only differences are that the functional variant needs to have its code forms wrapped in an anonymous function and that the function object passed to it is replaceable (since it's just a value).
---------------
For a more serious example, try thinking of how to implement something like CL:LOOP (a complex iteration construct) without a macro system.
Sure, LOOP is a very complex macro. But my point was that most macros in real codebases are these simple boilerplate wrappers that help readability.
Mind you, I don't necessarily like codebases that are too full of DSLs either.
A less trivial example is defining different types of subroutine. In StumpWM, a tiling WM written in Common Lisp, there is the concept of commands. They're functions, but they're invoked via a string: "command arg1 arg2". And these strings can be bound to keys. But the args might be numbers, windows, frames, strings, etc.
Commands are defined through a defcommand macro. It takes types! And there's a macro for defining arbitrary types and how to parse them from a string. A command is actually a function under the hood, with a bunch of boilerplate to parse the arguments, stick the name in a hash table, call pre- and post-command hooks, set some dynamic bindings, and so on. Defcommand abstracts this away, and you can write a command just like a normal Lisp function, except for the types.
Well there's an old debate about macros and functions, especially when you have closures, since closures can also delay assembly of bits of logic (half the work macros do, the rest being actual syntactic processing when people do that).
You have to understand, too, that mainstream languages didn't have closures until very recently, so a lot of things look less obvious now.
I've been using Java since version 1.1, and I've seen features that had to go through committee, into a new version of the language spec, and into an implementation before we could use them, where in Lisp you could just add these features yourself at the top of any file and immediately start using them. For example, consider the try-with-resources [0] syntax sugar. So instead of thinking "I never actually ran into a need for them", think of new language features that you have started using: those are the kinds of things you could've added yourself.
Also look at any kind of code-gen tooling like parser generators or record specifications like Protocol Buffers as examples of what you could do within the language.
> This is useful for compiler programmers, or maybe also those writing source code analyzers/optimizers, but is that it? On occasion I have had to write DSLs for the user input, but in these cases the (non-programmer) users didn't want to write Lisp so I used something like Haskell's parsec to parse the data.
If you're talking about Haskell, you should be talking about the folks who write Template Haskell, which is the macro system for GHC. There are plenty of Haskell programmers who know how to write Template Haskell, and there are plenty who don't. By contrast, I don't think there's a single Lisp programmer who can't write Lisp macros.
That's homoiconicity. Once you learn Lisp, you automatically know how to write Lisp macros. Once you learn Haskell (or Ocaml or Rust), you don't automatically know how to write macros in that language (and the macro system may not even be portable across compilers).
Now, stuff that would have to be implemented in the compiler to update the language, can now be written just as a "normal" program, that adds whole new features to your programming language.
For example, the entire object system in Common Lisp was implemented as macros.
Yes, most programming tasks don't require this kind of power. But it does mean that when programming in Lisp, it's very rare to be stuck because your programming language doesn't implement some feature you need for the task at hand.
> For example, the entire object system in Common Lisp was implemented as macros.
It wasn't. The original Common Lisp Object System implementation has three layers: the bottom layer is object-oriented (especially the Meta-Object Protocol), above that is a functional layer, and on top there are macros for the convenience of the user.
I actively want people to NOT change constant to c. I want the language to be a predictable shared base, not something I have to relearn and customize for every project.
I'm not a language designer. That's hard. That takes time and effort. And I don't do anything that can't be done in Python or Dart or whatever. Customizing the language is time not spent on the project, and batteries included stuff already has everything I need.
I think lisp is good for people who "think inside their heads", as in, they think "I want to do this, oh, I could do it this way, then I'd need this resource, let's build it".
If you think "Interactively" as in "I want to do this, what does Google tell me others are doing, oh yeah, this was made almost exactly the same use case, I'll start with this resource, now I'll adapt my design to fit it", you might not have any ideas for language tweaks to make in the first place.
I basically never code and think "I wish I could do that in this language", aside from minor syntactic sugar and library features. New abstractions and ideas don't just pop up in my mind; what the common mainstream languages have is the entirety of what programs are made of, as far as I'm concerned.
I don't entirely disagree that _changing_ the language is a bit of a no-no, like changing the behavior of existing keywords and operators.
However, any program that declares a variable or new function could be said to extend the language, since, if you declare some function, well, that function is now, at least in any proper language, as much part of the language of that program, as any builtin function is.
Sure, if all you have is an empty .c file, you can say "this program is standard C", but as soon as you've declared a variable or function, your program becomes in a way a superset of C: it is all of standard C plus the functions, data structures, and variables you've defined. To extend that program, it is not enough to keep strictly to the standard language; you must also take into consideration the superset of functionality that is part of the program.
In this way, programming is much more about creating a language that speaks natively in the abstractions of the domain, and then using that language to solve specific tasks within the domain.
And so it turns out you're always tweaking the language; it's just the degree to which you can tweak it that differs.
> This is useful for compiler programmers, or maybe also those writing source code analyzers/optimizers, but is that it?
It is also useful for anyone wanting to implement language-level features as simple libraries. Someone else brought up Nim here: it's a great example of what can be done with metaprogramming (and in a non-Lisp language) as it intentionally sticks to a small-but-extendable-core design.
There are macro-based libraries that implement the following, with all the elegance of a compiler feature: traits, interfaces, classes, typeclasses, contracts, Result types, HTML (and other) DSLs, syntax sugar for a variety of things (notably anonymous functions `=>` and Option types `?`), pattern matching (now in the compiler), method cascading, async/await, and more that I'm forgetting.
Some Lisp advocates may tell you that Lisp doesn't even have hygienic macros; Lisp dialects like Scheme do. Lisp usually has procedural macros, which transform source code (in the form of nested lists), not ASTs (like Nim).
That Nim has 'powerful hygienic macros' is fine, many languages have powerful macros.
As part of delivering an e-commerce solution in Scheme, I wrote a module which allowed for SQL queries to be written in s-expression syntax. You could unquote Scheme variables or expressions into the queries and it would do the right thing wrt quoting, so no connection.prepareStatement("INSERT INTO CUSTOMERS (NAME, ADDRESS) VALUES (?, ?)", a, b) type madness. Wanky, you bet. But oh so, so convenient.
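A rough Python sketch of the idea (function name and s-expression shape invented here, not the original Scheme module): compile a list-structured query into a parameterized statement, so interpolated values become `?` placeholders instead of text spliced into the SQL.

```python
def sexp_to_sql(expr):
    # expr is ("insert", table, {column: value, ...}); values are returned
    # separately as parameters, never spliced into the query string.
    op, table, values = expr
    assert op == "insert"
    cols = ", ".join(values)
    marks = ", ".join("?" for _ in values)
    sql = f"INSERT INTO {table} ({cols}) VALUES ({marks})"
    return sql, tuple(values.values())
```

The real win in the Scheme version is that unquoted variables drop into the right slots with no manual bookkeeping of which `?` matches which argument.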
It's not tied to complex DSLs and compilation. I was frustrated by the lack of f-strings in Emacs Lisp, so I hacked up a macro, and now I have embedded scoped string interpolation. I can write (f "<{class} {fields}>"). Having this freedom is really not a niche thing; it's mentally and pragmatically important.
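For flavor, here is a runtime approximation in Python (a toy frame hack rather than a macro; the elisp version expands at compile time): a function that fills `{name}` placeholders from the caller's scope. `class` is a Python keyword, so the sketch uses `cls`.

```python
import inspect

def f(template):
    # Look up placeholder names in the caller's local (then global) scope.
    caller = inspect.currentframe().f_back
    return template.format_map({**caller.f_globals, **caller.f_locals})

def demo():
    cls, fields = "point", "x y"
    return f("<{cls} {fields}>")   # -> "<point x y>"
```

A macro version would resolve the names lexically at expansion time instead of peeking at stack frames at runtime, which is one reason the macro approach is the cleaner one.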
I think both the idea and execution are great. These games would be useful even just as problem statements. I like that each problem clearly defines the desired tone and goals, and that the sample solutions have explanations.
In both technical and creative writing, I agree that the main issues I've seen are unnecessary filler words, needlessly complicated sentences, and difficulty in clearly expressing the point and staying on topic.
Some ideas:
- A copy of the original text with highlighted words above the editor might be nice
- Not sure if the timer is helpful, might cause people to do a poor job for fear of running out of time. Could start without a timer and add it in as users get more practice
If you click the "i" button in the bottom-left during a game, you'll see the original text with required terms highlighted. Lots of people seem to miss that so I need to figure out a way to make it more clear.
In early testing, people seemed to enjoy the challenge the timer provides. But yeah to be honest, I personally don't like it...I'm a slow writer and hate to be rushed. Paid users can disable the timer.
I don't think people want forward secrecy for their email. If they get a new computer, they probably want all their mail on there, right? Isn't porting over their email efficiently at odds with forward secrecy? Also, is forward secrecy compatible with any kind of encrypted search (I know most encrypted search schemes leak too much these days, but if the alternative is not encrypting email at all...)?
Also, how would it work with multiple people in a thread that can be added/removed arbitrarily, or email addresses that resolve to multiple users? Messaging and email seem like different models to begin with.
Keeping old messages around for all practical purposes negates forward secrecy in any messaging system. It isn't just an email issue. If they can get your secret key they can pretty much for sure get your old messages.
Most email users keep their messages in cloud storage (IMAP) so that changing computers is a non-issue. OpenPGP is an encrypt once scheme so that messages on an IMAP server are encrypted and stay encrypted.
Systems that lack forward secrecy are by design incapable of preventing archives of eventually-plaintext messages. There's nothing you can do about it; every message you send is irrevocably a part of the adversary's record, and, because you rely on a single long-term key, you know eventually that record will be plaintext. That's why forward secrecy is such a big deal, and why every modern messaging cryptosystem uses ephemeral keys.
But this only applies for messages that are deleted on your local device (either manually or through an automatic timer). Otherwise, whatever adversary stole your keys can steal your message archive too, they're on the same device. Now, assuming you aren't going to be deleting most of your mail, I don't see how forward secrecy is "such a big deal" in this scenario. It's certainly nice to have, but it definitely has drawbacks wrt the features I mentioned earlier.
Post-compromise security, on the other hand, makes more sense, since the future messages don't exist yet.
You can't meaningfully delete messages in non-forward-secret systems, because part of the premise of all these systems is that your adversary is recording everything.
I agree with you there. But is your point that any secure email system must critically have forward secrecy, or its insecure? Even though forward secrecy really only gives you any benefit for the messages that you delete, which most people don't in the context of email?
Just thinking, if people had the option between 1) deleting their mail and 2) email search, secure (unlike WhatsApp) and easy (unlike Signal) backups, ability to offload your email archive to the server (it's common to have gigabytes of mail, do you want to store all of it on a mobile phone forever? what happens if you drop it in a river?), and so on, don't you think people would go for option 2?
This is all disregarding the specifics of PGP-encrypted mail, for which I agree is not great.
The point is that one of the basic properties of messaging encryption is forward secrecy. It's an argument about how messaging is different from backup, package signing, file encryption, distributed logs, file transfer, and secure transports (though some of these really want forward secrecy too), and how PGP advocates back-rationalize not needing forward secrecy so they can defend their weird archaic tool.
Your argument is that forward secrecy is important in messaging because forward secrecy is important in messaging?
I'm not trying to be argumentative here, I actually don't understand what the reason it's so critical is, nor have I really found any explanations online. For text messaging where you don't really go back to read your old messages, sure, forward secrecy makes sense. Email seems to be a different story where user expectation is different and forward secrecy both precludes many desired features and also doesn't provide significantly more security, other than in very limited circumstances.
Also, I'm not an advocate of PGP at all. If people can use Signal for their usecase, great! They should do that. But Signal's model does not work for everyone's usecases. How do I send a Signal message to [email protected] to report a vulnerability? Is the entire security team supposed to share a mobile phone with Signal on it? What about banks that need to send secure email to each other, but must retain all messages for compliance purposes? (Again, I'm not advocating that PGP should be used in this scenario either, just that there's room for a better solution here, possibly without forward secrecy by default).
The premise of cryptographically secure messaging is that you have an adversary recording all your message traffic.
Lack of forward secrecy implies, logically, that if your long-term secret is ever compromised, every message you've ever sent is recoverable from the adversary's archive.
The point of forward secrecy is to break that attack, so that your adversary needs your long-term secret at the time it was used to send a message; having it after the fact doesn't help.
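A toy symmetric ratchet (illustrative only; real protocols like Signal's double ratchet also mix in fresh Diffie-Hellman outputs) shows the mechanism: each message key is derived from a chain key that is immediately replaced via a one-way function, so stealing today's chain key doesn't let you run the chain backwards to earlier message keys.

```python
import hashlib
import hmac

def ratchet_step(chain_key):
    # Derive a one-time message key and the next chain key with one-way
    # functions; the old chain key is then discarded.
    msg_key = hmac.new(chain_key, b"message", hashlib.sha256).digest()
    next_chain = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    return msg_key, next_chain

chain = b"\x00" * 32          # initial shared secret (toy value)
message_keys = []
for _ in range(3):
    mk, chain = ratchet_step(chain)
    message_keys.append(mk)
# An adversary who captures `chain` at this point cannot invert the HMACs
# to recover any entry of message_keys.
```

Contrast this with a single long-term key: there, compromising the key at any time decrypts every recorded message at once.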
I'm sometimes in the mood to write long posts and comments explaining this stuff, but today, on the bottom of this old thread, if you're trying to make a point about PGP vs. Signal and don't know how forward secrecy works, I'm probably the wrong person to have this conversation with.
>The premise of cryptographically secure messaging is that you have an adversary recording all your message traffic.
Agreed.
>Lack of forward secrecy implies, logically, that if your long-term secret is ever compromised, every message you've ever sent is recoverable from the adversary's archive.
Also agreed. I am trying to say that this only gives you better security for messages that you have deleted on your device, because if you haven't, regardless of whether your protocol is forward-secret or not, the adversary that has the power to compromise your device will get access to the message the plaintext of which is on the device, even if the keys aren't. Thus, the scope is significantly limited, unless you have a policy to regularly delete old messages on your device, and most people do not want this for email.
I can assure you I understand the cryptographic properties of forward secrecy. I don't understand your claim that it is a strict requirement for every secure messaging system, including an email-like usecase.
>I'm sometimes in the mood to write long posts and comments explaining this stuff, but today, on the bottom of this old thread, if you're trying to make a point about PGP vs. Signal...
I already said several times I don't care about PGP. I feel like you're not really reading or responding to any of my arguments about why forward secrecy doesn't really help you much in most users' threat models or why it precludes various desirable features (of course, I could be wrong here, which is what I'm asking about). Thanks for your time anyway.
Again: without forward secrecy, you can't delete old messages, because your adversary has already recorded them. The point is that a lack of forward secrecy creates a subtle limit to the security that can be achieved.
Yes, that's exactly what we both agreed on 6 comments ago. The question is, is the ability to delete messages critical enough to require in any possible secure messaging solution, at the expense of features like email search, archiving, backup, and transfer-to-new-device?
>Systems that lack forward secrecy are by design incapable of preventing archives of eventually-plaintext messages.
That is not what is being claimed here. Unless you add extra security in the form of something like a strong unique passphrase for the archived messages, an attack that gets the private key also gets the archived messages. In general, if you had a more secure method for protecting the archived messages, you could have used it to protect the private key. It is effectively the same problem.
Adding to that, is there a forward secrecy solution to email? I believe this happens in TLS during negotiation, but a similar thing doesn't really exist in one-way communications.
Assuming you don't want to keep any "chain state" in between messages (which seems reasonable), you can always consume a fresh one-time key of the recipient for every message. The first downside is knowing that the one-time key hasn't been reused; for this you can either trust the service provider or use blockchain or blockchain-like technologies. The second downside is that the user has to be online to generate a ton of one-time keys. I believe puncturable encryption helps with this, letting the recipient "puncture" their key at the used-up key identifiers so they don't have to be online all the time. No idea how practical this is.
While doing some research for a final project about using Grobner bases for cryptography, I came across an interesting, aptly titled paper: "Why You Cannot Even Hope To Use Grobner Bases in Public-Key Cryptography: An Open Letter to A Scientist Who Failed and a Challenge to Those Who Have Not Yet Failed".
Not only is this paper written in a very wry style not super common in math papers (it is addressed to "Dear Deluded Author"), it seems all the authors are pseudonyms: Boo Barkee, Deh Cac Can, Julia Ecks, Theo Moriarty, and R.F. Ree. And its abstract includes a large quote from the Steganographia of Trithemius (a 15th-century occultist who wrote several books on magic that were actually "encrypted" books on early cryptography in disguise).
When I tried to do some research on this mysterious paper, I couldn't really find any references or explanations for who these people are, where they are from, or why they wrote this paper. The only thing I could come up with is that "Boo Barkee" sounds a lot like "Bourbaki", the surname of a pseudonymous group of French mathematicians [https://en.wikipedia.org/wiki/Nicolas_Bourbaki].
All this to say: does anyone here know about this paper or who the authors are? Why is it all so mysterious? Are there supposed to be steganographically hidden messages inside the paper itself? Are the other authors' names also references?
As for my final project, I ended up not being able to figure out a way to use Grobner bases for cryptography.
“The name of Boo Barkee, who lived in Ithaca, NY, is known for several papers he published alone (Barkee 1988) and with his colleagues (Barkee, Dennis, and Wang 1990, Barkee, Can, Ecks, Moriarty, and Ree 1994). As one can read in the work of Kreuzer and Robbiano (2005): [...] The truth is that Boo Barkee was a dog belonging to Moss Sweedler, who while writing his paper on cryptography decided to use his dog’s name as a pseudonym. At least two of Barkee’s coauthors were using fake names too: Deh Cac Can was a pen name of D. Naccache, and Theo Moriarty was in fact Teo Mora. Julia Ecks and Richard Francis Ree have not disclosed their identities”
— https://link.springer.com/article/10.1007%2Fs00283-017-9763-...
Thanks for the reference. Haven't looked into this in a few years, cool to see something has shown up about it now. I guess Sweedler was just having some fun then?
Still curious about the Trithemius quote and if it has any particular relevance to the paper.