

Can You Trust AI with Your Research Data? Exploring SciSpace’s Privacy & Safety Practices
Data safety and privacy remain the top concern for researchers exploring AI tools. In this session, we’ll walk through exactly what happens when you upload drafts, notes, or papers to SciSpace. You’ll learn where your data is stored, who (if anyone) can access it, and whether it is ever used to train models, so you can separate myth from fact.
We’ll then explore the safeguards in place: encryption protocols, secure storage, and SciSpace’s approach to institutional and journal compliance. Our speakers will also address how SciSpace handles sensitive content and how to stay aligned with academic integrity standards.
Finally, we’ll outline where the boundaries lie and share best practices for safe, responsible use of AI in your research workflow.
This is a practical, clarity-driven session designed to help researchers use AI tools with confidence.