At the very least, every school, subject, and teacher should be obliged to run experiments during the school year -- A/B/C trials exploring different forms of note-taking: handwritten, computer-typed, and none at all.
Then see how each affects the kids' learning speed and retention across subjects, and have teachers compare notes with one another to learn what they did differently and what did or didn't work.
Ideally they'd also assess how each approach worked for different types of students: those with strong vs. weak reading skills, with good vs. bad grades, and especially those underperforming their potential.
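For concreteness, here's a toy Python sketch of the kind of tally such a trial could produce, comparing retention across the three conditions and stratifying by reading skill. Every number and field name below is invented purely for illustration:

    from collections import defaultdict
    from statistics import mean

    # Hypothetical per-student results from one class's A/B/C note-taking trial.
    results = [
        # (condition, reading_skill, retention_score_pct)
        ("handwritten", "strong", 78), ("handwritten", "weak", 61),
        ("typed",       "strong", 74), ("typed",       "weak", 55),
        ("no_notes",    "strong", 66), ("no_notes",    "weak", 49),
    ]

    by_condition = defaultdict(list)
    by_stratum = defaultdict(list)
    for condition, skill, score in results:
        by_condition[condition].append(score)
        by_stratum[(condition, skill)].append(score)

    print({c: mean(scores) for c, scores in by_condition.items()})    # overall comparison
    print({k: mean(scores) for k, scores in by_stratum.items()})      # split by reading skill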
The idea that we would A/B test handwritten vs. typed notes to see which improves retention is focusing on the wrong thing. It's like A/B testing mayo or no mayo on your Big Mac to see which version is a healthier meal. No part of the school system is optimized for retention. It's common for students to take a biology class in 9th grade and then never study biology again for the rest of their lives. Everyone knows they won't remember any biology by the time they graduate, and no one cares.
We know what increases retention: active recall and (spaced) repetition. These are basic principles of cognitive science that have been demonstrated empirically many times. Please try to implement those before demanding that teachers run A/B tests over what font to write the homework assignments in.
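The scheduling half of that is almost trivially easy to mechanize. Here's a minimal Python sketch of a Leitner-style spaced-repetition scheduler; the box intervals and the example card are arbitrary choices of mine, not anything from a particular curriculum or tool:

    from dataclasses import dataclass, field
    from datetime import date, timedelta

    # Review intervals in days for each Leitner box; the values are illustrative only.
    INTERVALS = [1, 3, 7, 14, 30]

    @dataclass
    class Card:
        prompt: str
        answer: str
        box: int = 0                       # 0 = newest / least-known box
        due: date = field(default_factory=date.today)

    def review(card: Card, recalled: bool) -> None:
        """Active recall step: promote the card on success, demote to box 0 on failure."""
        card.box = min(card.box + 1, len(INTERVALS) - 1) if recalled else 0
        card.due = date.today() + timedelta(days=INTERVALS[card.box])

    def due_today(cards: list) -> list:
        """Cards whose spaced interval has elapsed and should be quizzed again."""
        return [c for c in cards if c.due <= date.today()]

    card = Card("What does mRNA do?", "Carries genetic instructions from DNA to ribosomes")
    review(card, True)    # box 1, due again in 3 days
    review(card, True)    # box 2, due again in 7 days
    review(card, False)   # forgotten: back to box 0, due again tomorrow
    print(card.box, card.due)

The hard part isn't the algorithm; it's getting the quizzing built into the course schedule at all.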
You can certainly make it harder to cheat. AIs will inevitably generate summaries that are very similarly written and formatted -- content, context, and sequence -- making it easy for a prof (and their AI) to detect the presence of AI use, especially if students are also quizzed to validate that they have knowledge of their own summary.
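To make the detection part concrete, here is a rough Python sketch of flagging suspiciously similar submissions via pairwise cosine similarity of bag-of-words vectors. The 0.8 threshold and the sample summaries are invented for illustration; real detectors would also compare structure, sequence, and phrasing, as noted above:

    import math
    from collections import Counter
    from itertools import combinations

    def vectorize(text: str) -> Counter:
        # Crude bag-of-words counts; real detectors also use n-grams, structure, style.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def flag_similar(submissions: dict, threshold: float = 0.8) -> list:
        """Return (student, student, similarity) pairs that look suspiciously alike."""
        vecs = {name: vectorize(text) for name, text in submissions.items()}
        flagged = []
        for a, b in combinations(vecs, 2):
            sim = cosine(vecs[a], vecs[b])
            if sim >= threshold:
                flagged.append((a, b, round(sim, 2)))
        return flagged

    print(flag_similar({
        "alice": "The chapter argues that mitochondria regulate cellular energy budgets.",
        "bob":   "The chapter argues that mitochondria regulate the cellular energy budgets.",
        "carol": "Photosynthesis converts light into chemical energy inside chloroplasts.",
    }))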
Alternatively, the prof can require that students write out notes, in longhand, as they read, and require that a photocopy of those notes be submitted, along with a handwritten outline / rough draft, to validate the essays that follow.
I think it's inevitable that "show your work" will soon become the mantra not just of math, hard science, and engineering courses, but of essay-based courses as well.
I’m confused by the “at last”; it’s been consistently covered in The Guardian:
iran site:theguardian.com
There is a narrative floating around that seems like a Russian psyop designed to sow discord (not accusing you personally of being a bot): “the lefties are friends with Iran and don’t complain about their atrocities”, which is objectively false.
That's a great answer that offers concrete insight into what design thinkers are trying to achieve. And it seems like they have a chance to succeed if they also employ iterative experimental methods to learn whether their mental model of user experience is incorrect or incomplete. Do they?
Traditionally you use a lot of paper and experiential prototypes to iterate on, which doesn't cover everything but helps refine assumptions. (I sometimes like starting by mocking downstream output like reports and report data; it's a quick way to test specific assumptions about the client's operations and strategic goals, which can then shape the detailed project.) When I can, I also try to iterate using scenario-based wargaming, especially for complex processes with a lot of handoffs and edge cases; it lets us "chaos monkey" situations and stress-test our assumptions.
More than once, early iterations have led me to call off a project and tell the client that they'd be wasting their money with us. These were problems that could be solved more effectively internally (with process, education, or cultural changes), that weren't going to be effectively addressed by the proposed project, or, quite often, cases where what they wanted was not what they actually needed.
Increasingly, AI technical/functional prototyping is making its way into the early design process where traditionally we'd be doing clickable prototypes, letting us put cheap working prototypes in front of users to test-drive and give feedback on. I like to iterate aggressively on the data schema up front, so this fits well with my bias toward getting the database and query models largely built during the design effort, based on domain research and collaboration.
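As a rough illustration of what schema-first iteration can look like, here's a throwaway Python sketch that stands up an in-memory SQLite database and mocks the downstream report before any real system exists. The tables, columns, and report query are invented for the example, not drawn from any actual project:

    import sqlite3

    # Throwaway in-memory prototype: just enough schema to exercise the report we care about.
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE orders   (id INTEGER PRIMARY KEY, region TEXT, placed_on TEXT);
        CREATE TABLE handoffs (order_id INTEGER REFERENCES orders(id),
                               stage TEXT, hours_waiting REAL);
    """)
    con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                    [(1, "EU", "2024-03-01"), (2, "US", "2024-03-02")])
    con.executemany("INSERT INTO handoffs VALUES (?, ?, ?)",
                    [(1, "credit_check", 4.0), (1, "fulfillment", 30.0),
                     (2, "credit_check", 2.5), (2, "fulfillment", 12.0)])

    # The mocked downstream report: where do orders actually sit and wait?
    for row in con.execute("""
            SELECT o.region, h.stage, AVG(h.hours_waiting) AS avg_wait
            FROM handoffs h JOIN orders o ON o.id = h.order_id
            GROUP BY o.region, h.stage
            ORDER BY avg_wait DESC"""):
        print(row)

If the mocked report turns out to answer the wrong question, you've learned that for the cost of an afternoon rather than a build.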
Another classic example is data scientists trying to model biological processes (or to answer questions about those processes while ignorant of which components regulate which). Systems biology has a long history of largely clueless attempts to predict outcomes from complex processes that no one understands well enough to model usefully. The biologists know this, but the data scientists do not.
My perfect reading chair: the "Skye" model designed by Tord Bjorklund for Ikea in the 1970s. Its shape is essentially an Adirondack chair joined to an ottoman, but padded and leather-covered. Insanely comfy and perfect for reading.
Similar but more famous is the LC4 Chaise Longue designed by Le Corbusier.
Yep. After 40+ years in the business I chose to retire rather than madly pump out code using a robot. Sucked all the joy right out of the craft.
It's also a depressing wakeup call to realize how programming has evolved: it was a craft in which you wrote 90% of the instructions yourself, but with the rise of libraries, and now codebots, 99% of the instructions are written by others. Coding became cut-and-paste decades ago, but now it's degenerated into talk-and-walk. Soon there'll be no need for any skill from the code creator at all. The writing is on the wall: Frankensteinian LLMs surely will drive all the engineers from the building.
It was great while it lasted, but... sayonara hackerdom.
On an iPad I can't read the web page at all. The insert at the upper right overlies and obscures the main body of text.
It'd also help to be more concrete about your ambitions. What version of C is your preferred starting point, the basis for your "Better C"?
I'd also suggest that the name "Dependable C" confuses readers about your objective. You're not after reliability so much as a return to C's simpler roots. All the more reason to choose a recognized historical version of C as your baseline and call it something like "Essential C".
It's understandable that unusual patients are seen as confounding variables in any study, especially studies with small numbers of patients. Though I haven't read beyond the abstract, it also makes sense that larger studies (phase 3 or 4) should not exclude such patients, but could perhaps report results in more than one way -- including only those with the primary malady as well as including those with common confounding conditions.
Introducing too many secondary conditions into any trial is an invitation for the drug to fail on safety and/or efficacy, due to the increased demands on both. And as we all know, a huge fraction of drugs already fail in phase 3. Raising the bar further, without great care, will serve neither patients nor business.
Having been an "investigator" in a few phase 3 and 4 trials, I can confirm that all actions involving subjects must strictly follow the protocols governing conduct of the trial. It is extremely intricate and labor-intensive work, and even the smallest violations of the rules can invalidate part of, or even all of, the trial.
Most trials have long lists of excluded conditions. As you say, one reason is reducing variability among subjects so effects of the treatment can be determined.
This is especially true when the effects of a new treatment are subtle but still quite important. If subjects with serious comorbidities are included, treatment effects can be obscured by those conditions. For example, if a subject is hospitalized, was that because of the treatment, because of another condition, or because of some interaction between the two?
Initial phase 3 studies necessarily strive for as "pure" a study population as possible. Later phase 3/4 studies could in principle cautiously add more severe cases and those with specific comorbidities. However, there's a sharp limit to how many variations can be systematically studied, due to intrinsic cost and complexity.
The reality is that the burden of sorting out the use of treatments in real-world patients falls to clinicians. It's worth noting that the level of support for clinicians reporting their observations has, if anything, declined over the decades. IOW, valuable information is lost in the increasingly bureaucratic and compartmentalized healthcare systems that now dominate delivery of services.
How do you figure? The absolute SAE (serious adverse event) rate increases by 2 percentage points; nothing changes about the relative SAE rate. Does it change anything about your choice between different health technologies? No.
The SAE rate increases 2 percentage points on average, as I understand it - not necessarily uniformly across interventions. It could be the case that medicine A has 4% SAE in healthy patients, and 5% in unhealthy* ones, whereas medicine B has 3% SAE in healthy and 6% in unhealthy - and without testing on unhealthy patients, you don't know that medicine B is riskier for those patients than A.
It could be that I'm totally misunderstanding, and that every medicine has the same elevation of risk of SAE for unhealthy patients, but that seems unlikely to me. You do have 'doctor' in your username though, so I'm probably embarrassing myself here.
*apologies for the healthy/unhealthy terminology, I don't know the right lingo to use here.
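A toy calculation with those hypothetical numbers shows why the 2-point average can hide the decision-relevant difference (Python, all figures invented for illustration):

    # Hypothetical SAE rates (%) by medicine and patient group, matching the example above.
    rates = {
        "A": {"healthy": 4.0, "with_comorbidities": 5.0},
        "B": {"healthy": 3.0, "with_comorbidities": 6.0},
    }

    # Average absolute increase going from healthy to comorbid patients: 2.0 points.
    increases = [r["with_comorbidities"] - r["healthy"] for r in rates.values()]
    print(sum(increases) / len(increases))

    # Yet the safer choice flips depending on which group the patient belongs to.
    for group in ("healthy", "with_comorbidities"):
        safer = min(rates, key=lambda medicine: rates[medicine][group])
        print(group, "-> lower SAE rate:", safer)    # healthy -> B, with_comorbidities -> A

Same average increase, opposite choice depending on the patient group -- which is exactly why testing only on the healthier population leaves the question open.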