Sam Altman is back as CEO of OpenAI after inking a deal to return to the company following immense pressure from employees and investors on the board that ousted him less than a week ago.
Copilot, yes. You can find some reasonable alternatives out there, but I don’t know if I would use the word “great”.
GPT-4… not really. Unless you’ve got serious technical knowledge, serious hardware, and lots of time to experiment, you’re not going to find anything even remotely close to GPT-4. Probably the best the “average” person can do is run a quantized Llama-2 on an M1 (or better) MacBook, making use of its unified memory. Lack of GPU VRAM makes running even the “basic” models a challenge. And, for the record, this will still perform substantially worse than GPT-4.
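For a sense of what that looks like in practice, here is a minimal sketch using llama-cpp-python (the Python bindings for llama.cpp), which runs quantized GGUF models on Apple Silicon via Metal. The model file path and quantization level are assumptions — you’d download a quantized build of the weights separately.

```python
# Minimal sketch: run a quantized Llama-2 locally with llama-cpp-python.
# Install with `pip install llama-cpp-python`. The GGUF path below is a
# hypothetical local file, not something this snippet downloads for you.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-13b-chat.Q4_K_M.gguf",  # assumed quantized weights
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to Metal on Apple Silicon
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize unified memory in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Even with full Metal offload, a 13B 4-bit model on a base M1 is usable but nowhere near GPT-4 in quality or speed.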
If you’re willing to pony up, you can rent some hardware from the usual cloud providers, but it won’t be cheap, and it will still require serious effort, since you’re basically going to have to fine-tune your own LLM to get anywhere in the same ballpark as GPT-4.
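To give a rough idea of the effort involved, here is a minimal sketch of parameter-efficient (LoRA) fine-tuning with Hugging Face transformers + peft on a rented GPU. The model name, placeholder dataset, and hyperparameters are illustrative assumptions, not a recipe that will get you to GPT-4 quality.

```python
# Minimal sketch: LoRA fine-tuning of a Llama-2 base model on a cloud GPU.
# Requires: pip install transformers peft datasets accelerate
# The model name and dataset below are placeholders for illustration only.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model_name = "meta-llama/Llama-2-7b-hf"  # assumed; gated, needs access approval
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Attach small low-rank adapters instead of updating all 7B parameters.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Tiny public dataset used purely as a stand-in for your own training data.
data = load_dataset("Abirate/english_quotes", split="train")
data = data.map(lambda x: tokenizer(x["quote"], truncation=True, max_length=256))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=2e-4, fp16=True, logging_steps=10),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

That’s the easy part; curating good training data and paying for enough GPU hours is where the real cost shows up.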