• 2 Posts
  • 11 Comments
Joined 1 year ago
Cake day: July 7th, 2023



  • LLMs are neural networks! Yes, they are trained on meaningful text to predict the following word (see the small sketch at the end of this comment), but they are still NNs. And after they have been trained on human-generated text, they can also be further trained with other sources and in other ways. The question is how an interaction between LLMs should be evaluated. When does an LLM find one good word, or a series of them? I have not described this, and I am also not sure what would be a good way to evaluate that.

    Anyway, I am sad now. I was looking forward to having some interesting discussions about LLMs, but all I get is downvotes and comments like yours that tell me I am an idiot without telling me why.

    Maybe I did not articulate my thoughts well enough, but it feels like people want to misinterpret what I’m saying.
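
    To make the "predict the following word" part concrete, here is a minimal sketch of that training objective, assuming PyTorch and a toy vocabulary; the model and sizes are made up for illustration, real LLMs do the same thing at enormous scale:

    ```python
    import torch
    import torch.nn as nn

    vocab_size, embed_dim = 100, 32            # toy sizes, not real model settings
    model = nn.Sequential(
        nn.Embedding(vocab_size, embed_dim),   # token id -> vector
        nn.Linear(embed_dim, vocab_size),      # vector -> logits over the next token
    )

    tokens = torch.randint(0, vocab_size, (1, 8))   # a fake 8-token "sentence"
    logits = model(tokens[:, :-1])                  # predict from each prefix position
    loss = nn.functional.cross_entropy(             # compare with the actual next tokens
        logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
    )
    loss.backward()   # gradients nudge the network toward better next-word guesses
    ```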



  • Well, getting a concept of how physics works (balancing, in your example) only from being trained on (random?) still images is a lot to ask imo. But these picture-generating NNs can produce “original” pictures. They can draw a spider riding a bike. It might not look very good, but it is no easy task. LLMs aren’t very smart compared to a human, but they have a huge amount of knowledge stored in them that they can access and, to a degree, combine.

    Yes, today’s LLMs would not produce anything if they talked to each other. They can’t learn persistently from any interaction. But if they become able to in the future, that is where I think it will go in the direction of AGI.


  • Well, LLMs don’t learn from any interaction at the moment. They are trained, and after that one can interact with them, but they don’t learn anymore. You can fine-tune the model with recorded interactions later, but they do not learn directly. So what I am saying is: if this changes and they keep learning from interactions, as we do, there will be a breakthrough. I don’t understand why you are saying that’s not how it works when I am clearly talking about how it might work in the future.

    I also don’t understand why you get upvoted for this and I get downvoted just for posting my thoughts about LLMs. To be clear, it is totally fine to disagree with my thoughts, but why downvote them?


  • Well, our natural languages have developed over thousands of years. They are really good! We can use them to express ourselves, and we can use them to express the most complicated things humans are working on. Our natural languages are not holding us back! Or maybe the better take is: if a language is not sufficient, we expand it as necessary. We develop new special words and meanings for special subjects. We developed math to express and work with the laws of nature in a very compact and efficient way.

    Understanding and working with language is the key to AGI.

    Yes, big NNs use a lot of power at the moment. A funny example: when DeepMind’s AlphaGo beat one of the best human players, the human mind was running on something like 40 W while AlphaGo needed something like a thousand times that. And the human even won a few games with his 40 W :)

    And yes, you are right, AI systems learn very inefficiently compared to a human brain. They need a lot more data/examples to learn from. When the AlphaZero chess engine learned by playing against itself, it played tens of millions of chess games in a few days, far more than a human can play in a lifetime.



  • Well, me as a human, yes! We all constantly have an inner dialogue that helps us solve problems, and LLMs could do this as well. In principle it is not so different from playing chess against yourself. As far as I know, these chess NNs play against older versions of themselves to learn, so they don’t have to play against an exact copy of themselves.

    Some of the training of image generators is done by two different AIs: AI-1 learns to differentiate between generated and real images, and AI-2 tries to trick AI-1 by generating images that AI-1 can’t tell apart from real ones. They both train each other! And the result is that AI-2 can create images that are very close to real images, all without any human interaction. But they do need real images as training data.
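
    That two-AI setup is basically a GAN, so here is a toy sketch of one training step, assuming PyTorch; the sizes and “images” are made up just to show the back-and-forth between the two networks:

    ```python
    import torch
    import torch.nn as nn

    D = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())  # "AI-1": real (1) or generated (0)?
    G = nn.Sequential(nn.Linear(16, 64))                # "AI-2": noise -> fake "image"

    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    real = torch.rand(8, 64)        # stand-in for a batch of real images
    noise = torch.randn(8, 16)

    # Train AI-1: label real images 1 and generated images 0.
    fake = G(noise).detach()
    loss_d = bce(D(real), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train AI-2: try to make AI-1 answer 1 for generated images.
    loss_g = bce(D(G(noise)), torch.ones(8, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    ```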





  • You could also use Clonezilla. The Arch install media comes with it. That is handy, because you should not boot into the system you want to clone anyway, so you can boot the Arch install media and do all the disk cloning from there (rough sketch of the idea in the edit at the end of this comment).

    Funnily enough, I have to do the same thing (from a 1 TB M.2 to a 2 TB M.2) on my desktop. I might do it tomorrow or on Sunday, and I plan to do it as described above. I don’t need to save a disk image to an external drive like you, because I have two M.2 slots on my motherboard.

    Maybe write a follow-up here on how you did it and whether everything went well?

    Good luck!

    Edit: Hey my first Lemmy comment! :)
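
    Edit 2: in case it helps, here is a rough sketch of what the block-for-block copy boils down to; the device paths are only examples (check yours with lsblk), and Clonezilla/dd do the real work plus the safety checks:

    ```python
    # Raw block-for-block copy, just to illustrate the idea.
    # The destination must be at least as large as the source, and nothing
    # on either disk should be mounted while this runs.
    SRC = "/dev/nvme0n1"      # example: old 1 TB M.2
    DST = "/dev/nvme1n1"      # example: new 2 TB M.2
    BLOCK = 4 * 1024 * 1024   # copy in 4 MiB chunks

    with open(SRC, "rb") as src, open(DST, "r+b") as dst:
        while True:
            chunk = src.read(BLOCK)
            if not chunk:
                break
            dst.write(chunk)
    ```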