13 points by adeebaslam 14 days ago | 14 comments
bnolsen 14 days ago
Run your own locally, but beware of copyright violations. My company uses some form of Copilot that is supposedly guaranteed to be copyright-safe. Personally, I think it's very bad policy to make your skills dependent on something proprietary, which is exactly what these LLM providers are trying to do to make back their billions in investment.
mvieira38 14 days ago
This is one of those things I just gave up on, like fully "degoogling". I ran my own Ollama server on my machine for a while, for code completion, but it's just garbage without GPU acceleration, so I just use ChatGPT and pretend I believe their privacy settings.
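
For reference, here is roughly what that setup looks like: a minimal sketch, assuming an Ollama server running on its default port (the model name is purely illustrative), that asks the local server for a completion over its HTTP API.

    import requests  # pip install requests

    # A sketch: ask a local Ollama server for a code completion.
    # The model name is illustrative; use whatever you have pulled.
    resp = requests.post(
        "http://localhost:11434/api/generate",  # Ollama's default endpoint
        json={
            "model": "codellama:7b",
            "prompt": "# Python function that reverses a string\ndef ",
            "stream": False,  # return a single JSON object, not a stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])  # the generated completion
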
jart 14 days ago
By using a privacy-preserving local LLM like Mozilla's llamafile.
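
For the curious, a minimal sketch of talking to one, assuming a llamafile is already running locally (it serves an OpenAI-compatible API on port 8080 by default, so nothing leaves the machine):

    import requests

    # A sketch: llamafile exposes an OpenAI-compatible API on
    # http://localhost:8080 by default, so requests stay on-device.
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "model": "local",  # placeholder; the loaded model is used anyway
            "messages": [
                {"role": "user", "content": "Explain mutexes in one paragraph."}
            ],
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])
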
AaronAPU 14 days ago
For what it’s worth, the last company I’d trust for privacy is one that calls itself Privacy AI.
HelloUsername 14 days ago
No direct answer, I just wanted to say this page might interest you as well:

https://duckduckgo.com/duckduckgo-help-pages/duckai/ai-chat-...

gbraad 14 days ago
I do not trust an agent near client code, or even PII. Multiple VS Code windows still seem to activate them from time to time, or a forced update does. This is why I do not enable them on my local machine, only in a VM/container that contains nothing but the codebase under edit.
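
A minimal sketch of that isolation, assuming Docker; "agent-image" is a hypothetical image containing the tooling, and the paths are illustrative:

    import subprocess

    # A sketch of the isolation idea: the agent tooling runs in a container
    # whose filesystem view is limited to the one project directory.
    # "agent-image" is a hypothetical image name; paths are illustrative.
    subprocess.run([
        "docker", "run", "--rm", "-it",
        "-v", "/home/me/project:/work",  # mount only the codebase under edit
        "-w", "/work",                   # start inside the project
        "agent-image",
    ], check=True)

Since the only bind mount is the project itself, the agent cannot wander into other client code on the host.
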
fortyseven 14 days ago
I avoid cloud-based stuff, though I do use GitHub Copilot, largely because I've used it since the early days and it's a great tool (and my employer pays for it). But otherwise, local LLM or gtfo.
whs 14 days ago
I use public LLMs the same way I'd use a search engine: to look up public information only, never proprietary data. Gemini in Google Workspace can be used for proprietary information, but of course only if Gemini suits the task.

As for coding models, my view is that they directly violate my copyright, especially since I checked one "open training data" corpus and it confirmed that my code is part of the corpus without the license being honored.

rabid_turtle 14 days ago
I never put any information into an LLM that I wouldn't want publicly known. I'm okay with an LLM knowing the coding projects I am working on.
hengheng 14 days ago
So your cloud service intends to compete with on-premises, internally hosted services on privacy?

We used to call that a honeypot.

underseacables 14 days ago
I asked an LLM if it would keep my data private and it said absolutely. It wouldn't lie to me, would it?
rhabarba 14 days ago
> I’m building a product called Privacy AI

Why?

> what's your take on using LLMs at work?

Don't.

carlmr 14 days ago
>> what's your take on using LLMs at work?

>Don't.

The only privacy-conscious answer.

paulcole 14 days ago
Do you mean you shouldn’t use LLMs at work at all? Or avoid them only if dealing with truly sensitive data (like healthcare records)?
rhabarba 14 days ago
You shouldn’t use LLMs at work at all.
paulcole 14 days ago
More power to you, but I get so much more done with them that I can’t imagine going back.
rhabarba 14 days ago
> I get so much more done with them

... poorly.

scottiebarnes 14 days ago
Does the existence of Google or Stack Overflow result in poor work, or is it the way the programmer utilizes them?
paulcole 13 days ago
Without even knowing my job, you seem very sure of yourself.
axegon_ 14 days ago
Honestly? I'm somewhat OK with Mistral, in that it's a French company and therefore needs to comply (on paper, anyway) with the GDPR, and if they do not, there are serious consequences. But overall: run locally. If it is not running on your hardware and it requires an internet connection to work, then the data is not yours. For the time being, building an AI rig at home is relatively cheap on eBay; if you play your cards right, it is doable for 1000 bucks or less.