Blockchain is an example of distributed ledger technology (DLT). These technologies provide infrastructure that enables us to construct a rich directory of information with many benefits. For example, with DLTs there is no centralised control of information and no single point of failure. Information is also immutable, easily tracked, and machine readable for automation by AI.
This version of the internet was originally described by Sir Tim Berners-Lee, the inventor of the World Wide Web. With the advent of DLTs, we are now entering this next phase of the web, known as the “Semantic Web” or “Web3”.
The aspect of blockchain that makes it more energy intensive than current centralised data storage solutions is its consensus mechanism (consensus is how the nodes in the system reach agreement). Blockchain’s reputation for environmental harm stems largely from proof-of-work (PoW), the mechanism by which Bitcoin reaches consensus, in which “miners” compete against each other with computing power. Other consensus mechanisms use a fraction of this energy (~99.95% less), for example proof-of-stake (PoS) or proof-of-authority (PoA), where instead of all nodes competing for each block, a validator (or group of validators) is chosen to produce each block.
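To make the energy difference concrete, here is a minimal sketch of the PoW “puzzle”: miners repeatedly hash candidate nonces until a digest meets a difficulty target, and the expected work grows exponentially with difficulty. This is an illustrative toy, not the code of any real chain; the function name and difficulty scheme are our own assumptions.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra zero digit multiplies the expected number of hashes by 16 --
# this exponential search is what makes PoW so energy hungry, while PoS/PoA
# simply select a validator and skip the search entirely.
nonce = mine("example block", difficulty=4)
```

Under PoS or PoA there is no equivalent loop: block production costs a single signature, which is where the ~99.95% energy saving comes from.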
We are currently testing our technology stack on a minimal-carbon-impact PoA platform while we wait for PoS to mature for our public network deployment.
Additionally, there is a large amount of redundancy in current scientific protocols and practices, which our transparent and open ecosystem aims to substantially reduce, for example through the transparent publishing of all results regardless of their statistical significance, and the sharing of interoperable datasets.
We intend to provide a dashboard in the future estimating how much carbon our solution saves vs. how much it uses, to transparently demonstrate the information outlined above.
We also offset the carbon generated by our company itself through Offsetra.
For our initial solution, we’re integrating DataLad, in addition to the ability to link to your existing digital identities (such as Google Scholar and ORCID iD). We will eventually integrate more analysis, visualisation and academic tools into our solution.
Trust among community members is vital for collaboration. The global pandemic has led to increasingly online methods of scientific collaboration. We are more reliant than ever on technology to collaborate with our peers.
We are passionate about fostering healthy digital science communities for a global inclusive culture of science, which allows people to interact with not only their traditional peers (e.g. in their institution) but also with those who have been historically excluded from collaboration due to geographical location and access to resources.
With Web3 tools, trust becomes publicly verifiable. This creates a new kind of trust, where communities can easily reach consensus on their shared realities and experiment with interpersonal trust in ways previously unimaginable. If you are interested in these concepts, please visit Kernel’s lesson on Trust to learn more!
To begin with, our solution will accept BIDS-compliant neuroscience data formats, including EEG, iEEG, MEG and MRI. We will also support Experiment Factory containers written in lab.js.
In the future our solution will also accept standard data types from other scientific fields.
Our solution will work in a similar way to Git version control. This means that any time you change your data, previous versions are saved and logged, so you can return to any earlier version of your data without needing to save copies explicitly along the way.
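The idea behind this Git-style model can be sketched in a few lines: each version of the data is stored under a hash of its content, so saving a new version never overwrites an old one. This is a simplified illustration of the concept, not our actual storage layer; the class and method names are hypothetical.

```python
import hashlib

class MiniStore:
    """Toy content-addressed store: every save keeps earlier versions reachable."""

    def __init__(self):
        self.objects = {}   # content hash -> content bytes
        self.history = []   # ordered log of version hashes

    def save(self, content: bytes) -> str:
        # The hash of the content is its permanent, immutable address.
        key = hashlib.sha1(content).hexdigest()
        self.objects[key] = content
        self.history.append(key)
        return key

    def checkout(self, key: str) -> bytes:
        # Any previously saved version can be retrieved by its hash.
        return self.objects[key]

store = MiniStore()
v1 = store.save(b"trial 1 results")
v2 = store.save(b"trial 1 results, corrected")
old = store.checkout(v1)  # the original version is still recoverable
```

Tools like Git and DataLad build on exactly this content-addressing principle, adding commits, branches and annexed large-file handling on top.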
All of our applications will run in the browser, with the heavy lifting done in secure cloud computing, so this shouldn’t be an issue.
Yes, your data remains yours, and you are able to publish and share this elsewhere.
Data can be anonymised in various forms. Raw data will be kept local where possible; if needed, large raw datasets will be securely stored on decentralised file storage networks. Methods such as Shamir’s Secret Sharing (sharding) and homomorphic encryption will allow others to compute on the data without it being revealed.
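As a rough illustration of the sharding idea, Shamir’s scheme splits a secret into n shares such that any k of them reconstruct it, while fewer than k reveal nothing. The toy implementation below works over a prime field and is for intuition only; it is not production cryptography and not our deployed code.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; all arithmetic is done in this field

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares; any k of them reconstruct it (Shamir's scheme)."""
    # Random polynomial of degree k-1 with the secret as its constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]

    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret via Lagrange interpolation at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

shares = split(secret=42, n=5, k=3)
recovered = reconstruct(shares[:3])  # any 3 of the 5 shares suffice
```

In practice, shares would be distributed across independent storage nodes so that no single party ever holds the full dataset.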
Our tech stack will enhance the data audit trail, reducing institutions’ reliance on their GDPR exemption status for research data and bringing them into closer compliance with the law. We encourage researchers to share their data with Opscientia if they are seeking low-friction GDPR compliance.
As for your institution, we are working on establishing an inter-institutional sharing infrastructure. If you would like your organisation to be included in this growing network, please fill out this survey.
Existing consent forms may not contain language that would enable you to release the data on peer-to-peer networks. However, if you are still in contact with the participants, you will in future be able to create a Science Quest on our platform and reissue a generic e-consent form, allowing each participant to consent in real time to how their data is used.
You will be able to generate boilerplate text that may be used in your ethical review application and consent forms when a project is registered on our platform.
We encourage you to deploy your experiments through our Science Quest portal (when this is live). However, templates for our e-consent forms will be open source and made available to researchers that would like to deploy their experiments outside of our ecosystem.
Under GDPR, participants retain ownership of their raw data, while universities are considered the Data Controller. In many cases, universities must apply for exemption status because of the limitations of their infrastructure for scientific research. Our platform will reduce the burden of GDPR compliance for institutions by giving participants the ability to consent into and out of a study in real time.
Our services separate ownership of data from computation on data. If access to data is revoked, whether by a participant or a researcher, no one will be able to run further computation on it. However, if a distinct transformation of the data has already been generated (one that cannot be used to identify the data source), it will belong to you as a researcher and may continue to be used.
We will soon launch our project submission and governance portal for scientific research! If you are interested in applying for funding, please fill out this survey and we’ll be sure to notify you as soon as our portal is live.
High quality research will be ensured through our platform by a number of processes and tools.
All experiments and code will be open source, enabling peer-review (by the community) to be embedded throughout the research life cycle.
Initially, all experiments must be preregistered. As part of this process, researchers will gather feedback on their protocol before their experiment can be initiated. Additionally, researchers must define their protocols and code during preregistration — with initiation only possible once code is deemed executable.
All contributions made by individual researchers, and feedback received from the community, will be open for all to see. This enables reputation systems to be created, and we are exploring innovative ways of using this to ensure quality. If you’d like to get involved in this exploration, please introduce yourself on the Reputation Working Group channel in our Discord!
Yes. Each researcher will need to set up an individual account to join their lab.
Contributions will be logged in the open, ensuring transparency. This allows all fine-grained contributions (e.g. data cleaning) to be logged in the project metadata. Authorship order will be determined by the platform based on the details of the project (e.g. whether it is an extension/fork of a previous project) and each researcher’s contributions.
Researchers will own the intellectual property (IP) of any project submitted through our platform, meaning that research can also be published elsewhere.
We aim to provide training for researchers to carry out high-quality research with our platform. In the future, we also aim to provide automation and suggestions to assist researchers in achieving their aims.
If you permission your data to be shared, and your data is consumed, you will be rewarded in platform credits. In the future, you will also receive credits based on your research contributions.
Credits are an abstraction of our underlying tech stack, which is built on blockchain/Web3 technologies. You will be able to use credits within our platform to access decentralised cloud services (storage and computation) and for crowdsourcing experiments.
We will curate public datasets and ensure they are always available and free through any basic account on our platform. However, our platform will also make it possible for other researchers, organisations and institutions to make their data available at a price they set (i.e. in exchange for a certain number of credits).
Credits can be used within our ecosystem, e.g. for data storage and crowdsourcing experiments. Credits will always be stable within our platform. In the future, we will also award grants for research — these will be paid in supported national currencies (e.g. SGD, USD, EUR).
DISCLAIMER: We are performing rigorous tests for security, legal compliance and privacy as we iterate on our products. These tests are heavily influenced by your curiosity and feedback. Please check back regularly as we update this page as our tech stack evolves. If you have any additional questions or would like to provide feedback, please fill out this form.