The Defense Production Act could be used to meet these ends. SpaceX is a defense contractor and exists at the privilege of the US Government for the US Government.
According to several people who work with SpaceX, there is a dedicated group of people whose entire job is to keep Elon distracted and away from all vital SpaceX functions.
SLS is on track to be more expensive per Moon mission, adjusted for inflation, than the Apollo program. It is wildly too expensive and should be cancelled.
This is compounded by the fact that the rocket is incapable of sending a crewed capsule to low lunar orbit, which is why the Lunar Gateway is planned for a Near-Rectilinear Halo Orbit instead.
Those working in the space industry know that SpaceX’s success is not because of Elon but because of Gwynne Shotwell. She is the President and COO of SpaceX and runs all things SpaceX. The best outcome after the election is to remove Elon from the board and revoke his ownership of what is effectively a defense company, for political interference in this election. Employees at SpaceX would be happy, the government would be happy, and the American people would be happy.
The technical definition of AI in academic settings is any system that can perform a task with reasonably good performance and do so on its own.
The field of AI is absolutely massive and includes super basic algorithms like Dijkstra’s Algorithm for finding the shortest path in a graph or network. (The harder routing problems, like the Traveling Salesman Problem, are NP-complete, with no known polynomial-time solution.) For large real-world networks, exact search is too slow, so routing systems use programmed heuristics to approximate optimal solutions, and it’s entirely possible that the path generated is in fact not optimal, which is why your GPS doesn’t always give you the guaranteed shortest path.
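For reference, here’s a minimal sketch of Dijkstra’s algorithm on a made-up toy network (the graph and node names are purely illustrative):

```python
import heapq

def dijkstra(graph, start):
    """Classic shortest-path search: greedily expand the closest
    unvisited node using a priority queue."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, already found a shorter path
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Toy road network: edges are (neighbor, distance).
roads = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 2, 'D': 8}
```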
To help distinguish fields of research, we use extra qualifiers to narrow focus, such as “classical AI” and “symbolic AI”. Even “machine learning” is too ambiguous, as it originally described statistical processes that find trends in data, i.e. “statistical AI”. Ever used Excel to find a line of best fit for a graph? That’s “machine learning”.
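Seriously, a line of best fit is a single function call; here’s the same thing in Python with made-up numbers:

```python
import numpy as np

# Toy data: hours studied vs. exam score (illustrative numbers).
x = np.array([1, 2, 3, 4, 5], dtype=float)
y = np.array([52, 60, 71, 78, 88], dtype=float)

# Ordinary least squares fit of a degree-1 polynomial: y ≈ m*x + b.
m, b = np.polyfit(x, y, 1)
print(f"best fit: y = {m:.2f}x + {b:.2f}")  # y = 9.00x + 42.80
```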
That said, “statistical AI” does accurately encompass all the AI systems people commonly think about, like “neural AI” and “generative AI”. But without getting into more specific qualifiers, “deep learning” and “transformers” are probably the best way to narrow down what most people think of when they hear “AI” today.
Valve is a unique company with no traditional hierarchy. In business school, I read a very interesting Harvard Business Review article on the subject. Unfortunately it’s locked behind a paywall, but below is Google AI’s summary of the article, which matches what I remember:
According to a Harvard Business Review article from 2013, Valve, the gaming company that created Half-Life and Portal, has a unique organizational structure that includes a flat management system called “Flatland”. This structure eliminates traditional hierarchies and bosses, allowing employees to choose their own projects and have autonomy. Other features of Valve’s structure include:
Someone did the math and realized we would need a 130% tariff on all imported goods to replace current income tax revenue.
People’s number one concern is inflation. If that tariff is enacted, we will see 100% inflation overnight!
You do realize that everything posted on the Fediverse is open and publicly available? It’s not locked behind some API or controlled by any one company or entity.
The Fediverse is to social media what Wikipedia is to encyclopedias, and any researcher or engineer, including myself, can and will use Lemmy data to create AI datasets with absolutely no restrictions.
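To make that concrete, here’s a rough sketch of what collecting that data looks like. I’m writing the endpoint and field names from memory of Lemmy’s v3 HTTP API, so treat them as assumptions:

```python
import requests

# Lemmy exposes posts over a public HTTP API; no key or login needed.
resp = requests.get(
    "https://lemmy.ml/api/v3/post/list",
    params={"sort": "New", "limit": 20},
    timeout=10,
)
resp.raise_for_status()

for item in resp.json()["posts"]:
    post = item["post"]
    print(post["name"])  # the post title
```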
When I go back to being a Beltway Bandit, I need to remember these!
To add to this insight, there are many recent publications showing dramatic improvements from adding another modality, like vision, to language models.
While this is conjecture only loosely supported by existing research, I personally believe that multimodality is the secret to understanding human intelligence.
I am an LLM researcher at MIT, and hopefully this will help.
As others have answered, LLMs have only learned the ability to autocomplete some given input, known as the prompt. Functionally, the model is strictly predicting the probability of the next word+, with some randomness injected so the output isn’t exactly the same for any given prompt.
The probability of the next word comes from the model’s training data, combined with a very complex mathematical method that computes the impact of every previous word on every other previous word and on the newly predicted word. This is called self-attention, but you can think of it as a computed relatedness factor.
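If you’re curious what that looks like concretely, here’s a stripped-down, single-head sketch in PyTorch (real models add multiple heads, causal masking, and many stacked layers):

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product attention.
    x: (seq_len, d_model) -- one embedding per token."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Every token scored against every other token: this
    # (seq_len x seq_len) matrix is the "relatedness factor",
    # and it's why cost grows quadratically with context length.
    scores = q @ k.T / k.shape[-1] ** 0.5
    weights = F.softmax(scores, dim=-1)
    return weights @ v

d = 16
x = torch.randn(10, d)  # 10 tokens, 16-dim embeddings
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # torch.Size([10, 16])
```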
This relatedness factor is very computationally expensive and grows quadratically with the number of words, so models are limited in how many previous words can be used to compute relatedness. This limitation is called the context window. The recent breakthroughs in LLMs come from the use of very large context windows to learn the relationships of as many words as possible.
This process of predicting the next word is repeated iteratively until a special stop token is generated, which tells the model to stop generating words. So, quite literally, the model builds its entire response one word at a time, from left to right.
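In rough sketch form, the whole loop looks like this (the model object and its next_token_probs method are placeholders for illustration, not a real API):

```python
import random

def generate(model, prompt_tokens, stop_token, temperature=0.8):
    """Build a response one token at a time, left to right."""
    tokens = list(prompt_tokens)
    while True:
        # The model returns a probability for every token in its
        # vocabulary, conditioned on *all* tokens so far.
        probs = model.next_token_probs(tokens)
        # Inject randomness so the same prompt can yield different text;
        # lower temperature concentrates mass on the likeliest tokens.
        next_token = random.choices(
            range(len(probs)),
            weights=[p ** (1 / temperature) for p in probs],
        )[0]
        if next_token == stop_token:
            return tokens
        tokens.append(next_token)
```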
Because all future words are predicated on the previously stated words, whether in the prompt or in subsequently generated words, it becomes impossible for the model to apply even the most basic logical concepts unless all the required components are present in the prompt or have somehow serendipitously been stated by the model in its generated response.
This is also why LLMs tend to work better when you ask them to work out all the steps of a problem instead of jumping to a conclusion, and why the best models tend to rely on extremely verbose answers to give you the simple piece of information you were looking for.
From this fundamental understanding, hopefully you can now reason about the LLM’s limitations in factual understanding as well. For instance, if a given fact was never mentioned in the training data, or an answer simply doesn’t exist, the model will make it up, inferring the next most likely word to create a plausible-sounding statement. Essentially, the model has been faking language understanding so well that, even when it has no factual basis for an answer, it can easily trick an unwitting human into believing the answer is correct.
---
+ More specifically, these “words” are tokens, which usually represent some smaller part of a word. For instance, “understand” and “able” would be represented as two tokens that, when put together, become the word “understandable”.
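You can see this splitting yourself with OpenAI’s tiktoken library; exact splits vary by tokenizer, so the pieces shown are illustrative:

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era tokenizer
ids = enc.encode("understandable")
print(ids)
print([enc.decode([i]) for i in ids])  # e.g. ['understand', 'able']
```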
Agreed.
Nevertheless, federal regulators will have an uphill battle, as mentioned in the article.
Neither “puffery” nor “corporate optimism” counts as fraud, according to US courts, and the DOJ would need to prove that Tesla knew its claims were untrue.
The big thing they could get Tesla on is the safety record for Autosteer. But again, there would need to be proof that Tesla knew.
I am a pilot and this is NOT how autopilot works.
There are some autoland capabilities in the larger commercial airliners, but an autopilot can be as simple as a wing-leveler.
The waypoints must be programmed into the GPS by the pilot. Altitude is entirely controlled by the pilot, not the plane, except on a programmed instrument approach, and only once it captures the glideslope (so you need to be in the correct general area in 3D space for it to work).
An autopilot is actually a major hazard to the untrained pilot and has killed many, many untrained pilots as a result.
Whereas when I get in my Tesla, I use voice commands to say where I want to go, and nowadays I don’t have to make interventions. Even when it was first released six years ago, it still did more than most aircraft autopilots.
AFAIK, there’s nothing stopping any company from scraping Lemmy either. The whole point of Reddit limiting API usage was so they could make money like this.
Moral considerations aside, there is nothing to stop anybody from training on data from Lemmy, just like there’s nothing stopping me from using Wikipedia. Most conferences nowadays require a paragraph on ethics in the submission, but I and many of my colleagues would have no qualms saying we scraped our data from open internet forums and blogs.
Not quite; this was made with a ControlNet. A hybrid image wouldn’t work as well as this does, but the underlying visual phenomenon is the same.
This is done by combining a diffusion model with a ControlNet. As long as you have a decently modern Nvidia GPU and some familiarity with Python and PyTorch, it’s relatively simple to create your own model.
The ControlNet paper is here: https://arxiv.org/pdf/2302.05543.pdf
I implemented this paper back in March. It’s as simple as it is brilliant. By using methods originally intended to adapt large pre-trained language models to a specific application, the authors created a new model architecture that can better control the output of a diffusion model.
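For anyone curious, here’s roughly what the plumbing looks like with Hugging Face’s diffusers library. The model IDs and the Canny-edge conditioning are just examples, not necessarily what was used for this particular image:

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load a pre-trained ControlNet (Canny-edge conditioning as an example)
# alongside a frozen Stable Diffusion backbone.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The conditioning image steers the composition: an edge map, or the
# pattern you want subtly embedded in the output.
control_image = Image.open("edge_map.png")  # hypothetical input file

image = pipe(
    "a medieval village street, detailed, photorealistic",
    image=control_image,
    num_inference_steps=30,
).images[0]
image.save("controlled.png")
```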
I don’t know why you are mentioning Starship when I made no mention of that. Starship HLS is also a dumb idea, but that’s beside the point.
SLS is horribly expensive for what it provides.