Create a model file from the generated training_state.dat #22
Unfortunately, the file format femtoGPT generates is specific to femtoGPT and not a standardized one, so no, you can't directly load `training_state.dat` into ollama. Although, maybe in the future, we can add the ability to generate standard model formats to femtoGPT :)
Thanks @keyvank for the quick response. So how can I run the model in inference mode?
@manojmanivannan Just change the inference call:

```rust
let inference = gpt.infer(
    &mut rng,
    &tokenizer.tokenize("YOUR INPUT TO THE MODEL"),
    100,
    inference_temperature,
    |_ch| {},
)?;

// Generate 100 characters with the currently trained model before
// starting the training loop.
println!("{}", tokenizer.untokenize(&inference));
```
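For anyone curious what `tokenize`/`untokenize` do in the snippet above, here is a minimal sketch of a character-level tokenizer of the kind femtoGPT uses. The `CharTokenizer` type and its method names are illustrative assumptions, not femtoGPT's actual API; the real project's tokenizer lives in its own module.

```rust
use std::collections::HashMap;

// Illustrative character-level tokenizer: each distinct character in the
// training corpus gets an integer id.
struct CharTokenizer {
    ch_to_id: HashMap<char, usize>,
    id_to_ch: Vec<char>,
}

impl CharTokenizer {
    // Build the vocabulary from every distinct character in the corpus.
    fn new(corpus: &str) -> Self {
        let mut chars: Vec<char> = corpus.chars().collect();
        chars.sort();
        chars.dedup();
        let ch_to_id = chars.iter().enumerate().map(|(i, &c)| (c, i)).collect();
        CharTokenizer { ch_to_id, id_to_ch: chars }
    }

    // Map text to token ids, silently dropping out-of-vocabulary characters.
    fn tokenize(&self, text: &str) -> Vec<usize> {
        text.chars().filter_map(|c| self.ch_to_id.get(&c).copied()).collect()
    }

    // Map token ids back to text.
    fn untokenize(&self, ids: &[usize]) -> String {
        ids.iter().map(|&i| self.id_to_ch[i]).collect()
    }
}

fn main() {
    let tok = CharTokenizer::new("hello world");
    let ids = tok.tokenize("hello");
    // Round-trip: untokenize(tokenize(x)) recovers x for in-vocabulary text.
    assert_eq!(tok.untokenize(&ids), "hello");
    println!("{:?}", ids);
}
```

The model itself only ever sees the integer ids; `untokenize` is what turns the inferred id sequence back into readable output in the `println!` above.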
@keyvank: Thanks for developing such a nice project. Can you also help by adding the full code for this inference to the project itself? I am a coder, but I don't know Rust. I tried the snippet you gave, but it gives lots of errors that I don't understand. Now, just to test the generated model, I will have to learn Rust.
@nitirajrathore Please check my last commit.
Apologies if my question is stupid, but is it at all possible to create a model file so we can run this generated model on, say, ollama?