Hacker News | gchallen's comments

I teach computing at the University of Illinois. I'm spending a lot of time thinking about how to adapt my own courses and our degree programs. I'm actually at a workshop about incorporating AI into computing education, so this was a timely post to find this morning.

We don't have a coherent message yet. Currently there's a significant mismatch between what we're teaching and the reality of the computing profession our students are entering. That's already true today. Now imagine 2030, when the students we admit today will start graduating. We're having students spend far too much time practicing classical programming, which is increasingly unnecessary and impedes our ability to teach other concepts effectively. You learn something about resource allocation from banging out malloc by hand, but not as much as you could if you properly leveraged coding agents.

Degree programs also take time and energy to update, and universities just aren't designed to deal with the speed of the changes we're witnessing. Research about how to incorporate AI in computing education is outdated before the ink is dry. New AI degrees that are now coming online were designed several years ago and don't acknowledge the emergent behavior we've seen over the past year. Given the constraints faculty operate under, it's just hard to keep up. I'm not defending those constraints: We need to do better at adapting for the foreseeable future. Creating the freedom to innovate and experiment within our educational systems is a bigger and more fundamental challenge than people realize, and one that's not getting enough attention. We have a huge task ahead to update both how and what we teach. I'm incorporating coding agents into my introductory course (https://www.cs124.org/ai) and designing a new conversational programming course for non-technical students. And of course I'm using AI to accelerate all of this work.

Emotionally, most of my colleagues seem to be stuck somewhere on the Kübler-Ross progression: denial (coding agents don't work), anger (coding agents are bad), bargaining (but we still need to teach Python, right?), depression (computing education is over). We're scared and confused too: acceptance is hard when you don't know what's happening next. That makes it hard to effectively communicate with our students, even if there's a clear basis for connection. Also keep in mind that many computing faculty don't code, and so lack a first-hand perspective on what's changing. (One of the more popular posts about how to use AI effectively on our faculty Slack was about correcting LaTeX formatting for a paper submission. Sigh.)

Here's what I'm telling students. First, if you use AI to complete an assignment that wasn't designed to be completed with AI, you're not going to learn much: not much about the topic, or about how to use AI, since one-shotting homework is not good prompting practice. Second, you have to learn how to use these new tools and workflows. Most of that will need to be done outside of class. Start immediately. Finally, speak up! Pressure from students is the most effective driver of curricular change. Don't expect that the faculty teaching your courses understand what's happening.

Personally I've never been more excited to teach computing. I'm a computing educator: I've always wanted my students to be able to build their castles in the sky. It was so hard before! It's easier now. Cue frisson. That's going to invite all kinds of new people with new ideas into computing, and allow us to focus on the meaningful stuff: coming up with good ideas, improving them through iterative feedback, understanding other problem domains, and caring enough to create great things.


Hi. Current CS undergrad here. I find this claim highly disagreeable:

> You learn something about resource allocation from banging out malloc by hand, but not as much as you could if you properly leveraged coding agents.

I think you are underestimating how effective "reinventing the wheel" is at producing good engineers through the act of building and discovery. Consider common undergrad CS projects: building a compiler, a file system, a text editor, or an implementation of malloc. Undergrads find these tasks grueling and conceptually challenging, and they deepen their understanding of computing concepts and software design patterns by struggling to implement these things by hand. If you explain the concepts to an undergrad and then hand them Claude Code for the implementation, you are defanging an otherwise significant obstacle that would have stimulated growth.

Undergrads learn by struggling. My friends and I like to say, bombastically, that the only way to teach an undergrad is to torture them.

I would much prefer to learn programming the classical way, and let my employer empower me with LLM technology once I have demonstrated proficiency in software engineering.


How will grads pass a LeetCode interview if they don't do classical programming (or do I misunderstand what that means)? It also raises another question: what is the future of leetcoding?

The slow speed of adoption in education has a positive side: it filters out some of the hype.

The first derivative is smoother.

Not always a bad thing.


I've built several bespoke "apps" that are essentially Claude Code + a folder with files in it. For example, I have Claude Coach, which designs ultimate frisbee workouts for me. We started with a few Markdown files—one with my goals, one with information about my schedule, another with information about the equipment and facilities I have access to, and so on. It would access those files and use them to create my weekly workout plans, which were also saved as files under the same folder.

Over time this has become more sophisticated. I've created custom commands to incorporate training tips from YouTube videos (via YT-DLP and WhisperX) and PDFs of exercise plans or books that I've purchased. I've used or created MCP servers to give it access to data from my smart watch and smart scale. It has a few database-like YAML files for scoring things like exercise weight ranges and historical fitness metrics. At some point we'll probably start publishing the workouts online somewhere where I can view and complete them electronically, although I'm not feeling a big rush on that. I can work on this at my own pace and it's never been anything but fun.

I think there's a whole category of personal apps that are essentially AI + a folder with files in it. They are designed and maintained by you, can be exactly what you want (or at least what you can prompt for), and don't need to be published or shared with anyone else. But to create them you've needed to be comfortable at the command line. I actually had a chat with Claude about this, asking if there was a similar workflow for non-CLI types. Claude Cowork seems like it. I'll be curious to see what kinds of things non-technical users get up to with it, at least once it's more widely available.
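For the CLI-comfortable, the pattern described above can be sketched in a few shell commands. Everything here is invented for illustration; the real Claude Coach folder surely looks different:

```shell
# Sketch of an "AI + a folder with files in it" personal app.
# All names and file contents are hypothetical stand-ins.
mkdir -p coach/workouts coach/data

# Context files the agent reads on every session.
cat > coach/goals.md <<'EOF'
# Goals
- Build sprint endurance for ultimate frisbee
- Stay injury-free during league season
EOF

# Database-like YAML for historical metrics the agent can score against.
cat > coach/data/metrics.yaml <<'EOF'
resting_heart_rate_bpm: [58, 57, 56]
squat_working_weight_kg: 80
EOF

# The "app" is then just running your coding agent inside this folder:
#   cd coach && claude
ls coach
```

The agent writes its weekly plans back into the folder (here, `coach/workouts/`), so the folder itself is the app's state.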


This resonates a lot. And we’re working on something in the same space: a way to build MCP apps for non-technical people. If there are builders here who like experimenting, we’re looking for beta testers: -> https://manifest.build


Teach high school English.


As a teacher, I agree. There's a ton of covert AI grading taking place on college campuses. Some of it by actual permanent faculty, but I suspect most of it by overworked adjuncts and graduate student teaching assistants. I've seen little reporting on this, so it seems to be largely flying under the radar. For now. But it's definitely happening.

Is using AI to support grading such a bad idea? I think that there are probably ways to use it effectively to make grading more efficient and more fair. I'm sure some people are using good AI-supported grading workflows today, and their students are benefiting. But of course there are plenty of ways to get it wrong, and the fact that we're all pretending that it isn't happening is not facilitating the sharing of best practices.

Of course, contemplating the role of AI grading also requires facing the reality of human grading, which is often not pretty. Consider in particular the relationship between delay and utility in grading feedback: rapid feedback enables learning and change, while feedback delayed too long has utility near zero. I suspect this curve actually falls to zero much more quickly than most people think. If AI can help educators return feedback to students more quickly, that may be a significant win, even if the feedback isn't quite as good. And reducing the grading burden also opens up opportunities for students to respond directly to critical feedback through resubmission, which is rare today on anything that is human-graded.

And of course, a lot of times university students get the worst of both worlds: feedback that is both unhelpful and delayed. I've been enrolling in English courses at my institution—which are free to me as a faculty member. I turned in a 4-page paper for the one I'm enrolled in now in mid-October. I received a few sentences of written feedback over a month later, and only two days before our next writing assignment was due. I feel lucky to have already learned how to write, somehow. And I hope that my fellow students in the course who are actual undergraduates are getting more useful feedback from the instructor. But in this case, AI would have provided better feedback, and much more quickly.


Immediate feedback from a good autograder provides a much more interactive learning experience for students. They are able to face and correct their mistakes in real time until they arrive at a correct solution. That's a real learning opportunity.

The value of educational feedback drops rapidly as time passes. If a student receives immediate feedback and the opportunity to try again, they are much more likely to continue attempting to solve the problem. Autograders can support both; humans, neither. It typically takes hours or days to manually grade code just once. By that point students are unlikely to pay much attention to the feedback, and the considerable expense of human grading makes it unlikely that they are able to try again. That's just evaluation.

And the idea that instructors of computer science courses are in a position to provide "expert feedback" is very questionable. Most CS faculty don't create or maintain software. Grading is usually done by either research-focused Ph.D. students or undergraduates with barely more experience than the students they are evaluating.


We've been doing this at Illinois for 10 years now. Here's the website with a description of the facility: https://cbtf.illinois.edu/. My colleagues have also published multiple papers on the testing center—operations, policies, results, and so on.

It's a complete game changer for assessment—anything, really, but basic programming skills in particular. At this point I wouldn't teach without it.


And the proud Illinois tradition of some mission-critical service crashing on the first day of class continues.

In this case, it is an external service. However, I also suspect that the Duo outage is shielding other on-campus services from load surges that would otherwise be making them crashy.

I guess I don't know how we could ever prevent such incidents. Given that the first day of classes is a well-kept secret /s.


> That said I’m not sure how much gender based affirmitive action there is in science/engineering today.

Potentially quite a bit. Here's some recent data about admissions into the highly-competitive Illinois CS program: https://www.reddit.com/r/UIUC/comments/12kwc4a/uiuc_cs_admis...

Note that admissions rates for female applicants are higher across all categories—international, out-of-state, and in-state. Obviously you can't fully tell what's going on here without more of an understanding of the strengths of the different pools, but a 10–30% spread (for in-state) suggests that gender is being directly considered.

IANAL, but I'm also concerned about the degree to which this decision affects the use of other factors during college admissions. Fundamentally admissions is a complex balance between prior performance and future potential, and only admitting based on prior performance means that we're stuck perpetuating existing societal inequities.


I do know that 25 years ago or so there was considerable weight given to gender in the sciences and engineering. I feel like all talk of it has disappeared, and I'm not sure whether that's because it's no longer a factor or because race became the dominant talking point.

From the data you present, I suspect that there is still weight given to gender. I wonder how much energy there would be for investigating this? I wonder how many guys who get rejected from MIT CS will now make TikToks about how a girl took his spot, since he can no longer say it was a black kid?


Harvey Mudd seems to discriminate heavily in favour of women.


As a CS faculty member at Illinois (aka UIUC), I don't think that we fit this model.

At least according to my quick reading of the article, ASU has a significant focus on inclusion as a core value. Overall Illinois does admit a large percentage of applicants: about 50% over recent years. (The number dropped a bit after we began participating in the Common App, which makes it easier for students to increase the number of institutions they apply to.)

However, that number hides the fact that admission to top programs like computer science is extremely selective and exclusive. Admission rates to CS have been around 7% recently. And while we've made a CS minor somewhat more accessible, we've also closed down pathways that allowed students to start at Illinois and transfer into a computer science degree. (At this point that's pretty much impossible.) We do have blended CS+X degree programs that combine core studies in computer science with other areas, and those are less selective, but they have their own limitations—specifically, having to complete a lot of coursework in some other area that may not interest you.

I think what's fooling you about Illinois is the fairly odd combination of a highly-selective department (CS) embedded in a less-selective institution. I'm sure that there are other similar pairings, but overall this is somewhat unusual. If you think about other top-tier CS departments—Stanford, Berkeley, MIT, CMU—most are a part of an equally-selective institution.

So with Illinois you're getting the cachet of an exclusive department combined with the high acceptance rate of an inclusive public land-grant university. But on some level this is a mirage created by colocated entities reflecting different value systems. And, unlike places like Berkeley and Virginia, which have been trying to admit more students into computing programs, no similar efforts are underway here at Illinois. (To my dismay.)

Overall, unfortunately it's still very obvious to me that exclusivity is part of what we're selling to students as a core value of our degree program. You're special if you got in—just because a lot of other people didn't. Kudos to anyone moving away from this kind of misguided thinking.


I've taught CS1 for going on 6 years now, to almost 10K students. I'll admit I found this post depressing to read. Because many of these pain points have obvious solutions that have been in use at many institutions for years. (Some of these solutions are in the paper, but are not novel.)

When / where are students struggling? Assess them frequently and you'll find out! We run weekly quizzes in my class. So we know exactly who's struggling with exactly what, and quickly. That allows us to do individual outreach, and for students to catch up before they get too far behind. We also use daily homework, for the same reasons. But a lot of CS1 courses are still using the outdated midterm and final model, maybe with some homework sprinkled in.

Frequently a glut of repetitive student questions points to bad course materials or poor course design. Make things clearer and make it easier for students to find information, and at least some of the repetitive question asking will diminish.

Grading and TA support are related. Graduate TA quality does vary greatly, and you need to design around this. For example: Never put students in a position to suffer for an entire semester at the hands of a bad TA. (Many courses do.) Undergraduates are almost always better at assisting with early CS courses, and usually cheaper. We've been shifting gradually toward more undergraduate support for our CS1 course, and it has been working out well. They frequently outperform graduate staff.

But no amount of course staff will be sufficient if you have them spend all of their time on tedious tasks that computers can do better: Like grading code! It's 2023. If you can't deploy your own autograder, buy one. Staff time grading code should be minimized or eliminated altogether. Freeing staff time for student support allows you to provide students with more practice, and accelerates the overall learning process. But many early CS courses are stuck in a situation where staff grading is bottlenecking how many problems they can assign. That's insane, when autograding is a well-established option. (Even if you want to devote some staff time to grading code quality, autograding should always be used to establish correctness. And you can automate many aspects of code quality as well.)

In my experience, what's at the root of a lot of these problems is simply that many people teaching introductory CS can't build things. Maybe they can implement Quicksort (again), but they can't create and deploy more complex user-facing systems. I mean, you can create an autograder using a shell script! Not a great one, but still far superior to manual human grading. Part of this is because these jobs pay poorly. Part is how we hire people for them, because the ability to build things isn't typically a criterion. Part of it is that there's little support for this in academia. It took me years of inane meetings to get a small cluster of machines to run courseware on for my 1000+ student class that generates millions of dollars in revenue.
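To make the shell-script point concrete, here is about the smallest possible correctness autograder: run the submission and diff its output against a reference. The filenames and the toy submission are invented for illustration; a real autograder would add sandboxing, timeouts, and per-test-case reporting.

```shell
# Minimal diff-based autograder sketch (toy example, hypothetical filenames).

# Stand-ins for a student submission and the instructor's expected output.
cat > submission.py <<'EOF'
print(sum(range(10)))
EOF
printf '45\n' > expected.txt

# The grading step: run the submission, compare its output to the reference.
if python3 submission.py > actual.txt 2>&1 \
    && diff -q expected.txt actual.txt > /dev/null; then
  echo "PASS"
else
  echo "FAIL"
  diff -u expected.txt actual.txt | head -20
fi
```

Crude, but it returns a verdict in milliseconds rather than days, and it can be rerun on every resubmission at no staff cost.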

But there's also a degree to which the CS educational community has started to stigmatize expert knowledge. If you do enjoy creating software and are good at it, you get a lot of side eye from certain people. "You know that students don't learn well from experts, right?" And so on. Yes, there is a degree to which knowing how to do something is not the same as being able to teach someone how to do it. But would you take music lessons from someone who was not only a mediocre player, but didn't seem to like music that much at all?

