The incentives to hack the Xbox One were few: easy sideloading, no exclusives, and not a great performance-per-dollar ratio either. It's the opposite of Nintendo consoles if you think about it, and Nintendo consoles are notorious for getting a homebrew scene very quickly.
Every time a console gets hacked, the checklist of SoC security architects grows a little longer: boot ROMs written in a formally verifiable language, hardware glitch detectors, CPUs running in lockstep to guard against glitches, checks against out-of-order completion of security phases, random delay insertion, and so forth.
When it comes to SoC security, the past is not a good predictor of the present. The previous Nintendo SoC was designed 15 years ago, and a lot has been learned since. It has become increasingly hard to bypass these mechanisms.
The fact that it took 13 years to hack the Xbox One is not because it's an unattractive platform: because of its high profile, it has been a popular subject for security-research grad students from the moment it was released. If anything, the complexity of the current hack shows how much SoC security has progressed over the years.
Beggars can't be choosers. I decide how and what I want to donate. If I see a cool project and I want to change something in what I think is an improvement, I'll clone it, have CC investigate the codebase, make the change I want, and test it; if it works nicely I'll open a PR explaining why I think it's a good change.
If the maintainers don't want to merge it for whatever reason, that's fine and the nature of open source, but I think it's petty to tell that same user who opened the PR "you should have donated money instead of tokens".
Beggars in fact can be choosers. If I give a beggar a rotten sandwich he can look at it and say "nah, I'm good". He can even be less polite and call me names for trying to give him food that isn't good to eat. Why would I do that anyway? Well, maybe because I'm trying to build an image of myself as a charitable person without actually bearing the effort and cost of making him a fresh sandwich. In this scenario, why wouldn't people take the beggar's side?
You're subtly shifting the framing to defend doing something different than the post describes.
It makes it kind of unclear whether you understand the difference between using CC to "investigate the codebase" so you can make a change which you (implicitly) do understand, versus using an LLM to produce a plausible-looking PR when in actuality "you do not understand the ticket ... you do not understand the solution ... you do not understand the feedback on your PR".
The glasses have, in the same hole, an LED and a small light sensor (similar to the ones monitors use for auto-brightness).
When you start recording, the glasses check whether the light sensor reading is above a certain threshold; if it is, they start recording and turn on the LED.
So if you start recording and then cover the hole, it keeps recording, because the check only happens at start. Even if they wanted to fix this by making the light sensor check constantly, it wouldn't work, as the privacy LED indicator triggers that same sensor, which is a terrible design choice.
And disabling the light is as easy as taking a small drill bit and breaking either the light sensor module or the LED. They can detect that it's been tampered with, and they show a giant notice saying the privacy light is not working, but they still let you record anyway lol.
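The start-only check described above can be sketched in a few lines. This is purely illustrative logic with made-up names (`Glasses`, `AMBIENT_THRESHOLD`), not the actual firmware:

```python
# Sketch of the flaw: the ambient-light reading gates recording
# exactly once, at start, and is never consulted again.

AMBIENT_THRESHOLD = 50  # made-up units


class Glasses:
    def __init__(self, ambient):
        self.ambient = ambient      # current light-sensor reading
        self.led_on = False
        self.recording = False

    def start_recording(self):
        # The only place the sensor is ever read.
        if self.ambient < AMBIENT_THRESHOLD:
            return False            # hole covered: refuse to start
        self.led_on = True          # privacy light turns on...
        self.recording = True       # ...and recording begins
        return True

    def cover_hole(self):
        self.ambient = 0            # no further check ever runs


g = Glasses(ambient=100)
g.start_recording()                 # passes the one-time check
g.cover_hole()                      # LED hidden, sensor dark...
print(g.recording)                  # ...but it keeps recording
```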
> Even if they wanted to fix this by making the light sensor do a constant check it wouldn't work as the privacy led light indicator is triggering the same sensor,
The privacy LED could just turn off for a couple of milliseconds (or less) while the light sensor performs its check.
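That idea can be sketched as a blank-and-sample loop; every name here is invented for illustration, not a real device API: turn the LED off, read the now-dark sensor, then re-light it.

```python
import time

AMBIENT_THRESHOLD = 50  # made-up units


def covered_check(sensor_read, led_set, blank_ms=2):
    """Blank the LED, sample ambient light, then restore the LED."""
    led_set(False)                      # LED off so its own glow can't feed the sensor
    time.sleep(blank_ms / 1000.0)       # brief, ideally imperceptible, blank
    reading = sensor_read()
    led_set(True)                       # restore the privacy light
    return reading < AMBIENT_THRESHOLD  # True -> the hole looks covered


# Toy harness: a covered hole reads ~0, an open one reads ~120.
print(covered_check(lambda: 0, lambda on: None))    # covered
print(covered_check(lambda: 120, lambda on: None))  # open
```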
> The privacy LED could just turn off for a couple of milliseconds (or less) while the light sensor performs its check.
True, but then that would mean a blinking LED instead of a constantly lit one, which is a different product requirement from what it currently does.
I don't think the cheap light sensor would have a fast enough polling rate for that. And if you increase the polling rate, I'll just put a phosphorescent sticker over the hole that absorbs the LED's light and glows back with enough afterglow that the photoresistor still reads a value that allows recording.
Also what is the implication here? If you cover the hole accidentally for one microsecond do you invalidate the whole recording? Does it need to be covered for more than one second, two seconds, ten?
All of that for what? So that in two years we can have Chinese off-brand clones for 50 dollars that offer no security mechanisms anyway?
We all need to understand this is the new normal: being able to be recorded anywhere, anytime, just like you can get punched in the street anywhere, anytime. We only act on things that can be proven in court to have harmed you.
We successfully shamed people out of wearing Google Glass. We also mostly have social norms about when recording with your smartphone is okay. We don't need to accept defeat about these glasses just yet.
I feel like it was pretty common to have the red light blink on and off every second when recording. It would make sense to perform the sensor check during the off part of that cycle.
Sounds like it would be pretty easy to fake out with a custom circuit too, for those that are willing to go beyond ‘whoops how did that happen’ levels.
The script doesn’t self-update unless you actually want it to via an explicit flag.
It doesn’t have any features other than installing the latest release from the codex GitHub repo and updating it, and I don’t want it to have more features than that. This is not meant to replace any codex installation via a package manager, if you have a package manager where codex is available then by all means install it through that.
Maybe it's the 'tism, but I also read that sentence as neutral. You expected very little and you got very little; why would that be positive or negative? Maybe it should be positive because you got what you were expecting? But I would call getting what you expect neutral: if you expected little and got a lot, that would be positive; if you expected a lot and got little, that's negative. But if you expected little and got little, the clearest reading is that it's a neutral statement. Am I missing something?
The problem is that people who are depressed often don't have the energy to change their lifestyle and start exercising, which requires significant effort.
That's why psychiatrists will suggest antidepressants or electroconvulsive therapy (in extreme cases of depression): because clients are unable to help themselves.
The same happens for physical health too. I jumpstarted progress with GLP-1s and statins, and now I probably don't need them because I enjoy exercising and eating well.
> I started programming when I was seven because a machine did exactly what I told it to, felt like something I could explore and ultimately know, and that felt like magic. I’m fifty now, and the magic is different, and I’m learning to sit with that.
Don't take this the wrong way but this is more of an age thing rather than a technology advancement thing.
Kids growing up nowadays that are interested in computers grow up feeling the same magic. That magic is partly derived from not truly understanding the thing you are doing and creating a mental "map" by yourself. There is nothing intrinsic to computing nowadays that makes it less magic than fiddling around with config.sys, in 50 years there will be old programmers reminiscing of "Remember when all new models were coming out every few months and we could fiddle around with the vector dimensionality and chunking length to get the best of gpt-6.2 RAG? Those were the times".
> There is nothing intrinsic to computing nowadays that makes it less magic than fiddling around with config.sys
There definitely is: the rent-seeking behavior is out of control. As a kid I could fiddle with config.sys (or rather autoexec.bat) while nowadays wrestling a file path out of my phone is a battle and the system files of my phone are kept from me.
>As a kid I could fiddle with config.sys (or rather autoexec.bat) while nowadays wrestling a file path out of my phone is a battle and the system files of my phone are kept from me.
I think the magic happens at different levels of abstraction as time goes by, and it's easy to get stuck.
We kids could fiddle with autoexec and config to get DOOM going; today's kids can fiddle with a YAML file and have an MMORPG that handles 10,000 users from all over the world.
It's not the same but I can easily imagine it feeling at least equally magical for a kid today.
Why do you allow a mobile handheld computing and communication device to define "computing"? I understand that they are important devices, and that lots of people with a hacker mentality would like to hack them the way old folks once hacked DOS. But the current computing environment is much, much wider than iOS/Android, and if you're going to complain about just one aspect of it, I think it would be better to acknowledge that.
In many ways, things like RPi and Arduino have actually massively expanded the realm of totally hackable computing beyond what was even possible for early personal computer users.
As others have said, it's not so much that tinkering opportunities don't exist. It's more there's a slump in the market of doing relatively easy jobs for money. You can hack on esp32 all day, but there aren't many ways to make money doing so. Making software for the iPhone was (and is still, at this point) a pretty good gig.
I figure auto mechanics contended with this 25 years ago. Now it's hard to find someone to replace your water pump, if your vehicle even has one. Like auto mechanics, though, these machines still exist and there's still a big market for those skills. It might just require more legwork to find that work.
For the same reason computing used to be defined by a Commodore 64 more than by an IBM System/370-XA mainframe from the same year — they're the most commonly and most easily accessible computing devices.
Old farts like us think the desktop is the default kind of computer, but it isn't. Most computers are phones, followed by tablets and laptops with touchscreens, and desktops are the weirdest ones.
It's not a question of what's most common. It ought to be a question of what capabilities you think of when you think of "a computer". Most people do not think of their phone as "a computer", even though we tech heads all know that it is literally just that.
We need to follow the lead of most people here, and recognize that the phone is a deliberately limited device and its capabilities do not define what "a computer" is or should be or could be.
LLMs are not AI, but they are a great contextual search tool when they work.
When people first encounter ML, they fool themselves into believing it is intelligent... rather than a massive plagiarism and copyright/IP theft machine.
Fun is important, but people who think zero-workmanship generated content is sustainable are still in the self-delusion stage that marketers promote.
I won't count how many fads I've seen cycle in popularity, but many of us have seen the current active cons before. A firm that takes a dollar to make a dime in revenue is by definition unsustainable. =3
I like coding AIs because they're plagiarism machines. If I ask you to do some basic data manipulation operations, I want you to do it in the most obvious, standard way possible, not come up with some fancy creative solution unless it's needed for some reason.
If I'm dockerizing an app, I want the most simple, basic, standard thing - not somebody's hand-rolled "optimized" version that I can't understand.
config.sys was understandable. Now your computer has thousands (probably more) of config.sys-sized components and you are still only one person. The classic UI may improve your ability to find the components (sometimes) but can't reduce the complexity of either the components themselves or their quantity. AI makes it possible to deal with this complexity in a functional way.
Your last point is probably correct though, because AI will also allow systems to become orders of magnitude more complex still. So like the early days of the internet, these are still the fun days of AI, when the tool is overpowered compared to its uses.
$1 might not be a lot to you, but in some countries that's the daily wage. Even in rich countries one dollar for some might be the difference between eating or not eating that day.
Paywalling without any regional-pricing consideration is just going to incentivize people from poor countries not to participate in your project. Maybe that's okay for you, but it's something to consider.
You're right, I'm fortunate enough not to have that experience. But not only are food and gas much more than €1, people in this situation are also too focused on finding a way to make money to care about submitting merge requests.