

Taking Control of Your Online Experience: The Case for Client-Side Content Filtering

I love technology. We love movies, we love television, we love technology. Yet as a society, we often use technology in an almost inhuman manner.

It appears that social media companies are implementing filters to regulate content, with the intention of creating a more suitable experience for consumers and curbing the dissemination of misinformation. However, the COVID-19 pandemic revealed a potential for bias, suppression, and censorship in this process. Such manipulation could also extend to political motives, where information might be skewed to suit a particular agenda.

Control should be on the user’s side

This raises the question: Why employ content filtering on the server side when it could be done on the client side? Could users have settings that allow them to customize content filtering based on their preferences? There are valid reasons for such an approach. Users may wish to filter out redundant, repetitive, or annoying content, similar to how ad blockers or spam filters function, but on a more sophisticated scale.

This concept could involve a machine learning component, perhaps a Bayesian filter. Users could prime it with specific keywords, examples of unwanted posts, and other content to be avoided. The filter would live on the user’s computer or phone, or run as a browser plug-in. Processing content at the browser level, after decryption, lets the filter see exactly what the user sees; a router-level filter could not, because the traffic it handles is encrypted. In effect, this client-side filter would work like a firewall, analyzing content and blocking or redirecting it based on the user’s own criteria.
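
As a rough illustration of the Bayesian idea, here is a minimal sketch in Python. The BayesianContentFilter class, the word-level naive Bayes model, and the “unwanted”/“wanted” labels are assumptions of mine for the example, not a finished design; a real plug-in would also persist its training data and hook into the browser’s rendering pipeline.

```python
import math
import re
from collections import Counter


def tokenize(text):
    """Lowercase a post and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())


class BayesianContentFilter:
    """Naive Bayes filter the user primes with examples of what to avoid."""

    def __init__(self):
        self.word_counts = {"unwanted": Counter(), "wanted": Counter()}
        self.post_counts = {"unwanted": 0, "wanted": 0}

    def train(self, text, label):
        """Prime the filter with one example post ('unwanted' or 'wanted')."""
        self.word_counts[label].update(tokenize(text))
        self.post_counts[label] += 1

    def prob_unwanted(self, text):
        """P(unwanted | text) via naive Bayes with add-one smoothing."""
        vocab = set(self.word_counts["unwanted"]) | set(self.word_counts["wanted"])
        total_posts = sum(self.post_counts.values()) or 1
        log_scores = {}
        for label in ("unwanted", "wanted"):
            # Prior: how many examples of each class the user has given.
            log_p = math.log((self.post_counts[label] + 1) / (total_posts + 2))
            denom = sum(self.word_counts[label].values()) + len(vocab) + 1
            for word in tokenize(text):
                log_p += math.log((self.word_counts[label][word] + 1) / denom)
            log_scores[label] = log_p
        # Convert log scores back to a probability for "unwanted".
        m = max(log_scores.values())
        odds = {k: math.exp(v - m) for k, v in log_scores.items()}
        return odds["unwanted"] / (odds["unwanted"] + odds["wanted"])

    def should_block(self, text, threshold=0.8):
        """Hide the post once it looks enough like the primed examples."""
        return self.prob_unwanted(text) >= threshold
```

A word-level model like this is deliberately simple; the same interface could sit in front of a heavier learned model without changing how the plug-in calls it.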

AI in the user’s toolbox

Considering concerns about AI alignment and safety, it’s important to note that the genie is already out of the bottle: machine learning resources and knowledgeable individuals are widespread. Rather than trying to rein in AI development, the focus should be on leveraging it to the user’s advantage. While organizations like OpenAI strive for responsible AI, and governments aim to establish their own standards, consumers still need a means to navigate the information landscape. Users should also be able to choose their own alignment, rather than having a government or corporation choose it for them as a form of soft censorship.

Fake out the Fake News

This approach could even extend to traditional media, which sometimes appears to push biased narratives. The name for such a tool is open to discussion: call it a firewall, a content blocker, or a “BS Defender.” Whatever the name, the need for a user-configurable, trainable, adaptive filter is evident. It could learn through reinforcement from the user’s own preference selections, combined with Bayesian adaptive learning, as sketched below.
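
To make that adaptive loop concrete, here is a hedged sketch in Python of how preference-based reinforcement might work. The PreferenceFilter class, its logistic-style weight update, and the sample posts are all my illustrative assumptions, one simple way the adaptive part could work rather than an established design.

```python
import math
import re
from collections import defaultdict


def tokenize(text):
    """Lowercase a post and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())


class PreferenceFilter:
    """Adapts from the user's own hide/keep clicks.

    Hypothetical sketch: a logistic-regression-style online update over
    word features stands in for reinforcement from human preferences.
    """

    def __init__(self, learning_rate=0.5):
        self.weights = defaultdict(float)  # per-word evidence of "hide"
        self.learning_rate = learning_rate

    def score(self, text):
        """Estimated probability the user would want this post hidden."""
        z = sum(self.weights[w] for w in tokenize(text))
        return 1.0 / (1.0 + math.exp(-z))

    def feedback(self, text, hide):
        """Record one preference click and nudge the weights toward it."""
        error = (1.0 if hide else 0.0) - self.score(text)
        for word in tokenize(text):
            self.weights[word] += self.learning_rate * error


# The loop a browser plug-in might run: show a post, record the click,
# and fold the preference straight back into the model.
pf = PreferenceFilter()
pf.feedback("you won't believe what happened next", hide=True)
pf.feedback("quarterly results for the open source project", hide=False)
print(round(pf.score("you won't believe this"), 2))  # ~0.68, leaning toward hide
```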

Next Steps

With coding experience spanning four decades, I can already imagine the framework for such a tool at a high level, and it could be built in a few different ways. Right now, I am still thinking it through. The idea could grow into a comprehensive white paper, and on the coding side, a proof of concept could be written, perhaps in Python, to showcase the core filtering concept in action. Sharing this concept for consideration and exploration is crucial at a time when content filtering, covering everything from traditional media to social media and ads, is becoming increasingly important. An adaptable, user-driven filter, bridging the gap between the browser and the user experience, holds immense value.
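
As a taste of what that proof of concept might look like, the toy loop below reuses the hypothetical BayesianContentFilter sketched earlier to screen a miniature feed. The sample posts, priming examples, and threshold are invented for the demo.

```python
# Toy proof of concept: run a miniature feed through the filter sketched
# earlier. Feed contents and priming examples are made up for the demo.
feed = [
    "shocking miracle cure doctors hate",
    "new release notes for the browser plug-in API",
    "doctors hate this one weird trick",
]

f = BayesianContentFilter()
f.train("miracle cure doctors hate this one weird trick", "unwanted")
f.train("release notes for the new plug-in", "wanted")

for post in feed:
    # The first and third posts resemble the primed spam and get hidden;
    # the second sails through.
    print("[hidden by filter]" if f.should_block(post) else post)
```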