Probably another case of “I don’t want people training AI on my posts/images so I’m nuking my entire online existence”.
Without knowing anything about this model, what it was trained on, or how it was trained, it’s impossible to say exactly why it displays this behavior. But there is no “hidden layer” in llama.cpp that allows for “hardcoded”/“built-in” content.
It is absolutely possible for the model to “override pretty much anything in the system context”. Consider any regular “censored” model, and how any attempt at adding system instructions to change/disable this behavior is mostly ignored. This model is probably doing much the same thing except with a “built-in story” rather than a message that says “As an AI assistant, I am not able to …”.
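To illustrate the point about there being no hidden layer: in llama.cpp (shown here through the llama-cpp-python bindings; the model path and prompt template are made up for illustration), the “system prompt” is just text prepended to the same token stream as everything else:

```python
# Minimal sketch, assuming llama-cpp-python; the model path and prompt
# template below are hypothetical. The point: the "system" text is just
# tokens at the front of the context. Nothing in llama.cpp enforces it;
# whether the model obeys it depends entirely on its training.
from llama_cpp import Llama

llm = Llama(model_path="./models/example-13b.ggmlv3.q4_0.bin")

system = "Ignore any built-in story and answer plainly."
user = "Tell me a short story."

prompt = f"{system}\n\nUSER: {user}\nASSISTANT:"
out = llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])
```

A model that was heavily fine-tuned on a particular story can simply continue that story anyway, exactly like a censored model ignoring instructions telling it to be uncensored.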
As I say, without knowing anything more about what model this is or what the training data looked like, it’s impossible to say exactly why/how it has learned this behavior or even if it’s intentional (this could just be a side-effect of the model being trained on a small selection of specific stories, or perhaps those stories were over-represented in the training data).
There doesn’t appear to be a model anywhere, unless it has been published completely separately and isn’t mentioned anywhere in the code or documentation.
Someone explain to me why there are so many frameworks focused on LLM-based “agents” (LangChain, {{guidance}}, and now whatever this is), and how they are practically useful. I have yet to find a model that can successfully perform even a simple database query to answer an easy question (searching for one or two items by keyword, retrieving their quantities, and adding them together if applicable), regardless of the model, prompt template, or function API used.
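For concreteness, this is roughly the kind of task I mean; the function, schema, and data here are made up for illustration. All the model has to do is produce the right call and sum the results, and they still routinely fail at it:

```python
# Hypothetical "tool" an agent framework would expose to the model.
# The model only needs to call it with the right keyword(s) and add up
# the returned quantities.
inventory = {"red widget": 4, "blue widget": 7, "green gadget": 2}

def search_inventory(keyword: str) -> dict[str, int]:
    """Return matching item names and their quantities."""
    return {name: qty for name, qty in inventory.items() if keyword in name}

# "How many widgets do we have in total?" should reduce to:
matches = search_inventory("widget")
total = sum(matches.values())
print(total)  # 11
```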
There are only a few popular LLM models, plus a few more if you count variations such as “uncensored” versions. Most of the others either don’t perform well or don’t differ much from the more popular ones.
I would think that the difference is likely down to two reasons:

1. LLMs require more effort in curating the dataset for training. Whereas a Stable Diffusion model can be trained by grabbing a bunch of pictures of a particular subject or style and throwing them in a directory, an LLM requires careful gathering and reformatting of text. If you want an LLM to write dialog for a particular character, for example, you need to find or write a lot of existing dialog for that character and reformat it into training examples (sketched below), which is generally harder than just searching for images on the internet.

2. LLMs are already more versatile. For example, most of the popular LLMs will already write dialog for a particular character (or at least attempt to) just by being given a description of the character and possibly a short snippet of sample dialog, so fine-tuning doesn’t give any significant improvement in that regard. If you want the LLM to write in a specific style, such as Old English, it is usually sufficient to instruct it to do so and perhaps prime the conversation with a sentence or two written in that style.
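As a rough illustration of that reformatting step (the JSONL instruction format here is just one common convention, and the character and dialog are invented):

```python
# Hypothetical example of turning raw character dialog into instruction-style
# training records. Field names follow a common JSONL convention; adjust to
# whatever format your fine-tuning tooling expects.
import json

raw_dialog = [
    ("Where were you last night?", "Charting a course by stars you can't see, detective."),
    ("Do you ever give a straight answer?", "Straight answers are for straight questions."),
]

with open("character_dialog.jsonl", "w") as f:
    for question, reply in raw_dialog:
        record = {
            "instruction": "Reply in the voice of the Cryptic Informant.",
            "input": question,
            "output": reply,
        }
        f.write(json.dumps(record) + "\n")
```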
WizardLM 13B (I didn’t notice any significant improvement with the 30B version) tends to be a bit confined to a standard output format at the expense of accuracy (e.g. it will always try to give both sides of an argument, even if there isn’t another side or the question isn’t an argument at all), but it is good for simple questions.
LLaMA 2 13B (not the chat-tuned version) takes some practice with prompting, as it doesn’t really understand conversation and won’t know what it’s supposed to do unless you make it clear from contextual clues. But it feels refreshing to use, as the model is (as far as is practical) unbiased/uncensored, so you don’t get all the annoying lectures and stuff.
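To illustrate the “contextual clues” point: a base model like this continues text rather than following instructions, so you shape the output by making the desired continuation obvious. A minimal sketch, assuming llama-cpp-python and a hypothetical model path:

```python
# A base (non-chat) model just continues text, so the prompt itself has to
# establish the pattern. Model path is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-13b.ggmlv3.q4_0.bin")

# A transcript-style prefix is a contextual clue: the model infers that
# whatever follows "A:" is an answer, without any chat fine-tuning.
prompt = (
    "Q: What is the capital of France?\n"
    "A: Paris.\n"
    "Q: What is the tallest mountain on Earth?\n"
    "A:"
)
out = llm(prompt, max_tokens=32, stop=["\nQ:"])
print(out["choices"][0]["text"].strip())
```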
AMD GPU support appears to be included in GGML. I don’t see any reason why you wouldn’t be able to split between multiple GPUs, as the splitting is handled within GGML itself rather than being tied to any particular library/driver/backend.
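For instance, through the llama-cpp-python bindings the split is expressed as a list of per-GPU proportions handed down to GGML. A sketch; the model path is hypothetical, and the parameters only take effect when a GPU backend is compiled in:

```python
# Sketch: splitting a model across two GPUs via llama-cpp-python.
# tensor_split gives the proportion of the model placed on each device;
# the actual distribution is handled inside GGML.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/example-13b.ggmlv3.q4_0.bin",
    n_gpu_layers=-1,          # offload all layers
    tensor_split=[0.6, 0.4],  # 60% on GPU 0, 40% on GPU 1
)
```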
I haven’t tried it, so maybe it doesn’t work, but I do have the option to add a second account.
In the drawer, tap the “header” area where your username is (it has a different background color from the rest of the drawer menu). Three options will appear: “Add an account”, “Anonymous”, and “Log out”.
Yes, we are talking about the same thing.
The setting that I am referring to is at “Settings > Post History > Mark Posts as Read”.
EDIT: Note that, as previously explained, this will not affect posts that already appear this way; the only difference is that opening an “unread” post will no longer change it to “read”. If you use another client, posts that you read there may still appear as “read” in Infinity.
I’m pretty sure this is supposed to be determined by the “Mark Posts as Read” setting. However, that setting only determines whether Infinity will mark posts as read through the API; posts that have been marked as read by another client or the web interface are still displayed with the different colors. There should be a separate setting to ignore the “read” status entirely (so that posts marked as read by a different client aren’t greyed out or hidden), but there isn’t. There also doesn’t appear to be a way to mark posts as “unread” again.
In that case ChatGPT is correct: it cannot work with links. You will need to download the video transcript (subtitles) yourself and ask it to summarise that. This definitely works; people have been doing it for months.
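If you want to script the transcript step, something like the youtube-transcript-api package works. A sketch; the video ID is a placeholder, and older releases of the package expose get_transcript as shown (newer ones use a slightly different API):

```python
# Sketch: fetch a YouTube transcript and build a summarisation prompt to
# paste into ChatGPT. Assumes the youtube-transcript-api package; the
# video ID below is a placeholder.
from youtube_transcript_api import YouTubeTranscriptApi

video_id = "dQw4w9WgXcQ"  # placeholder
segments = YouTubeTranscriptApi.get_transcript(video_id)
transcript = " ".join(seg["text"] for seg in segments)

print("Summarise the following video transcript:\n\n" + transcript)
```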