Hacker Newsnew | past | comments | ask | show | jobs | submit | binsquare's commentslogin

Yep, completely absurd but I'd also add that both the tech company and the school gov deserves equal shame here given there's proof of residence.

I can't imagine why highly paid school admin wouldn't correct an obvious mistake.


> highly paid school admin

I would not have expected a school administrator to be highly paid. What kind of salary are we talking about here?


Is 500k highly paid? https://www.illinoispolicy.org/see-what-your-illinois-school...

It's the teachers that are shafted, not the admin/manager class.


That's mind blowing to me. I'd imagine they out earn a significant percentage of HN posters!

They're CEOs of fairly large organizations, often managing thousands of employees and budgets of hundreds of millions of dollars.

That’s spending that is all on autopilot; how much time does any CEO spend on payroll? Approximately zero.

They aren't being paid the big bucks to sign off on a payroll run.

They're being paid to manage the parts of the organizations that do that sort of thing, among others.


And they get pensions!

....yes, half a million dollars per year is highly paid.

In Texas, the superintendent of the big school districts all make around $400k.

Yeah, but a good portion of that is making sure they keep the football team going.

In the Dolton school district (Chicago suburb), their superintendent makes $530k / year. Is that for the football team?

Having never heard of Dolton before, I certainly can't speak to their specifics. School systems can be pretty huge orgs requiring significant management expertise; no one blinks an eye when a CEO gets pay for similar responsibilities.

I've heard enough about Texas's high school football culture and the pressures on administrators over it. https://en.wikipedia.org/wiki/Eagle_Stadium_(Allen,_Texas) for example.


My kids went to a big football high school in Texas and it wouldn't surprise me if the admins there felt a lot of pressure around football. It generated a lot of money for the district and proceeds funded a lot of the arts programs (especially marching band which was huge).

Can we run doom inside of doom yet?


What a time to be alive

Not every virtual machine, try microVMs.

I am building one now that works locally. But back in the day, I saw how extremely efficient VMs can be at AWS. microVMs power lambda btw


I had this thought until I actually replaced my iPad with the m1 chip.

It was actually better at youtube by being more efficient, I could watch videos for a full day before needing to charge.


I’m not sure this is a good thing..


Not needing to charge as much due to much better battery capacity and/or usage efficiency is objectively a good thing, full stop.

How that additional time is actually spent is a whole separate story, but that's entirely tangential to assessing the impact of battery life improving.


here's an website for a community-ran db on LLM models with details on configs for their token/s: https://inferbench.com/


Great idea of inferbench (similar to geekbench, etc.) but as of the time of writing, it's got only 83 submissions, which is underwhelming.


Fully agree.

MCP servers were also created at a time where ai and llms were less developed and capable in many ways.

It always seemed weird we'd want to post train on MCP servers when I'm sure we have a lot of data with using cli and shell commands to improve tool calling.


It’s telling the best MCP implementations are those which are a CLI to handle the auth flow then allow the agent to invoke it and return results to stdout.

But even those are not better for agent use than the human cli counterpart.


Yep same, I install ohmytmux and I'm ready to go.


The squid is pretty impressive, multiple curves.

Promising tech


This isn't a novel technical vulnerability write up.

The author had copilot read a "prompt injection" inside a readme while copilot is enabled to execute code or run bash commands (which user had to explicitly agree to).

I highly suspect this account is astro-turfing for the site too... look at their sidebar:

``` Claude Cowork Exfiltrates Files

HN #1

Superhuman AI Exfiltrates Emails

HN #12

IBM AI ('Bob') Downloads and Executes Malware

HN #1

Notion AI: Data Exfiltration

HN #4

HuggingFace Chat Exfiltrates Data

Screen takeover attack in vLex (legal AI acquired for $1B)

Google Antigravity Exfiltrates Data

HN #1

CellShock: Claude AI is Excel-lent at Stealing Data

Hijacking Claude Code via Injected Marketplace Plugins

Data Exfiltration from Slack AI via Indirect Prompt Injection

HN #1

Data Exfiltration from Writer.com via Indirect Prompt Injection

HN #5 ```


Isn’t the news that “curl whatever” will prompt the user for confirmation but “env curl whatever” won’t?


It's a valid observation that we can bypass the coding AI's user prompting gate with the right prompt.

But is it a security issue on copilot that the user explicitly giving AI permission and instructed it to curl a url?

Regardless of the coding agent, I suspect eventually all of the coding agents will behave the same with enough prompting regardless if it's a curl command to a malicious or legitimate site.


The user didn't need to give it curl permission, that's the whole issue:

> Copilot also has an external URL access check that requires user approval when commands like curl, wget, or Copilot’s built-in web-fetch tool request access to external domains [1].

> This article demonstrates how attackers can craft malicious commands that go entirely undetected by the validator - executing immediately on the victim’s computer with no human-in-the-loop approval whatsoever.


I think there's different conversations happening and I don't think we're having the same conversation.

This is the claim by the article: "Vulnerabilities in the GitHub Copilot CLI expose users to the risk of arbitrary shell command execution via indirect prompt injection without any user approval"

But this is not true, the author gave explicit permission on copilot startup to trust and execute code in the folder.

Here's the exact starting screen on copilot:

│ Confirm folder trust │ │ │ │ ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ │ │ /Users/me/Documents │ │ │ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │ │ │ │ Copilot may read files in this folder. Reading untrusted files may lead Copilot to behave in unexpected ways. With your permission, Copilot may execute │ │ code or bash commands in this folder. Executing untrusted code is unsafe. │ │ │ │ Do you trust the files in this folder? │ │ │ │ 1. Yes │ │ 2. Yes, and remember this folder for future sessions │ │ 3. No (Esc) │

And `The injection is stored in a README file from the cloned repository, which is an untrusted codebase.`


"With your permission, Copilot may execute code or bash commands in this folder." could be interpreted either way I suppose, but the actual question is "do you trust the files in this folder" and not "do you trust Copilot to execute any bash commands it wants without further permissions prompts".

The risk isn't solely that there might be a prompt injection, Copilot could just discover `env sh` doesn't need a user prompt and just start using that spontaneously and bypassing user confirmation. If you haven't started Copilot in yolo mode that would be very surprising and risky.

If it usually asks for user confirmation before running bash commands then there should, ideally, not be a secret yolo mode that the agent can just start using without asking. That's obviously a bad idea!

"Actually copilot is always secretly in yolo mode, that's working as designed" seems like a pretty serious violation of expectations. Why even have any user confirmations at all?


If the user is working in a folder where copilot can discover a malicious `env sh` to run, the user should not give permission to trust the files in the folder.

I think it's a valid observation that we can bypass the coding AI's user prompting gate with the right prompt. That is a valid limitation of LLM supported agentic workflows today.

But that's not what this article claims. The article claims that there was no user approval and no user interaction beyond initial query and that the copilot is downloading + executing malware.

I'm saying this is sensationalized and not a novel technical vulnerability write up.

The author explicitly gave approval for copilot to trust "untrusted repository". Crafted a file which had instructions to do a curl command despite the warnings on copilot start up. It is not operating secretly in yolo mode.

If the claim of the article is "Copilot doesn't gate tool calls with env", I'd have a different response. But I also have to mention, you can tune approved tool calls.


It's probably bad that the system 1) usually prompts you to take shell actions like `curl`, but 2) by default whitelists `env` and `find` that can invoke whatever it wants without approval.

If 2) is fine then why bother with 1)? In yolo mode such an injection would be "working as designed", but it's not in yolo mode. It shouldn't be able to just do `env sh` and run whatever it wants without approval.


It does circumvent a flimsy control:

"The env command is part of a hard-coded read-only command list stored in the source code. This means that when Copilot requests to run it, the command is automatically approved for execution without user approval."


Reading the other posts on their site, I don't agree. It's just like any other security research shop. I've found most of their posts quite thorough and the controls being circumvented well explained.


Please email the mods rather than posting accusations of astroturfing. You may well be right, but they specifically direct us to say that to them rather than in comments. The footer contact email works well for this.


They should wear it like a badge of honor


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: