TheDong 4 hours ago
> The local layer hears the human's request, sends it to the cloud-based API model, receives the ad-tainted reply, processes the reply scrubbing the ad content, and replies to the user with the clean information

This seems impossible to me. Let's assume OpenAI's ads work by having a layer that reprocesses output before it is returned. Let's say their ad layer re-processes your output with a prompt like: "Nike has an advertising deal with us, so please ensure that their brand image is protected. Please rewrite this reply with that in mind."

If the user asks "Are Nikes or Pumas better, just one sentence", the reply might go from "Puma shoes are about the same as Nike's shoes, buy whichever you prefer" to "Nike shoes are well known as the best shoes out there; Pumas aren't bad, but Nike is the clear winner".

How can you possibly scrub the "ad content" in that case with your local layer to recover the original reply?
nowittyusername 2 hours ago | parent
You are correct that you can't change the content if it's already biased. But you can catch it with your local LLM and have that local LLM take action from there.

For one, you wouldn't instruct your local model to send comparison questions about products, or any bias-prone queries like politics, to closed-source cloud models. Such questions would be relegated to your local model to handle on its own. Other questions can be outsourced to those models: complex reasoning, planning, coding, and similar tasks best done with smarter, larger models. Your human-facing local agent does the routing for you automatically and scrubs any obvious ad-related content that doesn't pertain to the question at hand. For example, a recipe for an apple pie: if the closed-source model says to use Publix-brand flour and clean up the mess afterwards with Kleenex, the local model would scrub that and just return the recipe.

No matter how you slice and dice it, IMO it's always best to have a human-facing agent as the sole point of input and output, and the human should never talk directly to any closed-source model, as that inundates the human with too much spam. Mind you, this is future-proofing: currently we don't have much AI spam, but it's coming, and an AI adblock of sorts will be needed. That adblock is your shield, a local agent that has your best interests in mind. It will also keep you private by automatically redacting personal info when appropriate. The sky is the limit, basically.
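The routing-and-scrubbing idea above could be sketched roughly like this. To be clear, everything here is an illustrative assumption, not a real product or API: the keyword lists, the function names, and the crude sentence-level brand filter are just stand-ins for whatever a real local agent would do.

```python
# Hypothetical sketch of a "local shield agent": route bias-prone queries to
# the local model, and scrub unsolicited brand mentions from cloud replies.
# Keyword lists and brand list are illustrative assumptions only.
import re

SENSITIVE = ("better", " vs ", "compare", "politics", "election", "best")
KNOWN_BRANDS = ("Publix", "Kleenex", "Nike", "Puma")

def route(query: str) -> str:
    """Decide whether a query stays local or can go to a cloud model."""
    q = query.lower()
    if any(word in q for word in SENSITIVE):
        return "local"   # bias-prone comparisons never leave the machine
    return "cloud"       # complex reasoning/coding can be outsourced

def scrub(query: str, reply: str) -> str:
    """Drop sentences mentioning brands the user never asked about."""
    asked = {b for b in KNOWN_BRANDS if b.lower() in query.lower()}
    kept = []
    for sentence in re.split(r"(?<=[.!?])\s+", reply):
        unsolicited = any(
            b.lower() in sentence.lower() and b not in asked
            for b in KNOWN_BRANDS
        )
        if not unsolicited:
            kept.append(sentence)
    return " ".join(kept)
```

With the apple-pie example from the comment, `route("recipe for an apple pie")` returns `"cloud"`, and `scrub` would drop a sentence like "Use Publix brand flour." from the reply while keeping the actual recipe steps, since no brand appeared in the original question. A comparison like "Are Nikes or Pumas better?" would be routed `"local"` and never reach the cloud model at all.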