The model is fed a few samples of previous attempts and their evaluations during the optimization of the current algorithm. Using that information, it can combine components of previous attempts into the current attempt at will, because all of it is fed into a single prompt, which the LLM can reference arbitrarily. So recombination is well represented here, bringing the method closer to a genetic algorithm. In essence, it combines elements of hill climbing, beam search, and genetic algorithms by virtue of the LLM's unconstrained nature.
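The control flow described above can be sketched as follows. This is a hypothetical toy, not any particular system: the LLM call is stubbed out with a simple recombination function (`llm_propose`), since the point is the loop structure (top-k attempts plus scores go into the "prompt"; the proposer may recombine them freely), not the model itself.

```python
import random

random.seed(0)

N_GENES, POP, TOP_K, GENERATIONS = 8, 10, 3, 20

def score(candidate):
    # Toy objective: maximize the sum of the genes.
    return sum(candidate)

def llm_propose(context):
    # Stand-in for the LLM call. A real system would serialize `context`
    # (previous attempts and their evaluations) into a single prompt;
    # here we just recombine genes from the top attempts and mutate one.
    parents = [c for c, _ in context]
    child = [random.choice(parents)[i] for i in range(N_GENES)]
    i = random.randrange(N_GENES)
    child[i] += random.choice([-1, 1])  # small mutation
    return child

# Random initial population of candidate "algorithms".
population = [[random.randint(0, 9) for _ in range(N_GENES)] for _ in range(POP)]
best_start = max(score(c) for c in population)

for _ in range(GENERATIONS):
    ranked = sorted(population, key=score, reverse=True)
    # The top-k attempts and their scores form the shared context.
    context = [(c, score(c)) for c in ranked[:TOP_K]]
    children = [llm_propose(context) for _ in range(POP - TOP_K)]
    # Beam-search-style truncation: keep the elites, refill with children.
    population = ranked[:TOP_K] + children

best_end = max(score(c) for c in population)
print(best_start, best_end)
```

Because the top-k survivors are carried over unchanged, the best score never decreases (the hill-climbing element), while the truncated ranking supplies the beam-search element and `llm_propose` the genetic recombination.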
A murderer who kills 1% of the people he meets has certainly committed a crime. A person who has a 99% chance of being a murderer has not certainly committed a crime.
That depends on your threshold of certainty. 1% is significant considering that, according to the OJJDP, about 5 million people were arrested on serious charges in 2019, so with a 1% false positive rate that would be 50,000 people falsely imprisoned every year.
It does not have to be, and really can't be, 0%, but 1% is unreasonably high in my opinion. If it can't be helped then it can't be helped, but that isn't necessarily the case with these devices.
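To put numbers on the scale of the problem, a quick back-of-the-envelope using the arrest figure above and a few hypothetical error rates:

```python
# Back-of-the-envelope: false imprisonments per year at a given
# false-positive rate, using the ~5 million serious-arrests figure.
arrests_per_year = 5_000_000
for fp_rate in (0.01, 0.001, 0.0001):
    falsely_imprisoned = int(arrests_per_year * fp_rate)
    print(f"{fp_rate:.2%} false positive rate -> {falsely_imprisoned:,} people")
# 1.00% false positive rate -> 50,000 people
# 0.10% false positive rate -> 5,000 people
# 0.01% false positive rate -> 500 people
```

Even a tenfold improvement in the error rate still leaves thousands of false positives a year at this volume.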
Written language is used, in large part, to express sensory data (ex: colors, shapes, events, sounds, temperatures, etc). Abstract models are, through inductive reasoning, extrapolated from that sensory information. So in effect more sensory data should mean more accurate abstract models.
For example, it might take several paragraphs to wholly capture all the meaningful information in one image in such a way that it can be reproduced accurately. Humans, and many animals, process large amounts of data before they are even capable of speech.
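A rough size comparison illustrates the gap; the figures below (one uncompressed 1080p RGB frame versus roughly a paragraph of ASCII text) are purely illustrative assumptions, not measurements:

```python
# Rough, illustrative comparison of raw data volume: one uncompressed
# RGB image frame versus a paragraph of plain text.
image_bytes = 1920 * 1080 * 3   # 1080p frame, 3 bytes per pixel
paragraph_bytes = 500           # ~one paragraph of ASCII
print(image_bytes // paragraph_bytes)  # -> 12441
```

On these assumptions, a single frame carries on the order of ten thousand paragraphs' worth of raw bytes, which is the sense in which text-only training data "pales in comparison".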
The data GPT-3 was provided with pales in comparison. It is unclear whether these GPT models are capable of induction, because it may be that they need more, or better sanitised, data to develop abstract models. Therefore they should be scaled up further until they improve only negligibly. If even then they are still incapable of general induction, or have inaccurate models, then the transformer model is not enough, or perhaps we need a more diverse set of data (images, audio, thermosensors, etc).
Growing poppies has been consistently profitable for decades and is likely to remain profitable for the foreseeable future as the demand has yet to decrease.
It is true, however, that as the supply increases, the likelihood that some consumers (namely addicts) won't be able to purchase future batches increases as well (due to overdose, loss of income, etc). The market will likely reach an equilibrium eventually, but so far increases in supply have been met with increases in sales, so that equilibrium hasn't been reached. It hasn't been a relevant consideration up to now, and there is no indication that it is relevant now. As such, it is hardly irrational or dumb to grow poppies at the moment.
The morality of the practice is another concern entirely. It is notable, however, that they do have legitimate medicinal applications.
As NN models get more advanced, speech synthesis will get progressively more convincing and less expensive to implement, even if the models aren't built for speech synthesis specifically. The same can be said for image generation/transformation. If we are to continue developing AI, then this is likely inevitable. There are benefits to these models, for mute people, for example. Adversarial models can be built to detect fake audio samples, and regulation (ex: adding tells/signatures in commercial products) would also help. The government would have to ban most AI research outright; anything less would only prolong the inevitable.