Release notes

March 23rd, 2020

New: Data model versions

To help you maintain your dataset queries and align them with changes in underlying data models, we have introduced data model versioning.

Data models determine the nomenclature for all elements that are included in a model, define all entities and their properties, as well as relationships between different entities. When there is a new version of a data model, the structure of these elements and their relationships can change, thus rendering previously saved queries non-runnable.

To make the changes more visible and help you rebuild your queries, data model versions are displayed in various locations in the Data Browser for each available dataset, as well as in the update history. Learn more

Recently published apps

DESeq2 was updated to the latest version and upgraded to CWL1.0.

Read more

March 17th, 2020

New: Real-time job monitoring / Instance metrics [BETA]

These new features provide additional job monitoring and debugging tools, readily accessible for each task during its execution. You can now monitor execution progress in more detail and diagnose stuck, prolonged, or misconfigured jobs on your own, while we continue to ensure the same level of reproducibility.

Real-time job monitoring

As long as the instance on which the job was running is active, you can now access the following information about the job execution environment:

  • Standard output and error streams.
  • Workspace directory content and structure.
  • Head/Tail preview of text file formats for files in the workspace directory.
  • Instance metrics for the duration of the job.

Read more

Instance metrics

The Platform now also lets you access instance metrics for all instances used in task execution. This information is available during task execution and for 15 days after the task has been executed. Read more

March 9th, 2020

New: Data Cruncher multiple environment setups

When selecting your Data Cruncher environment, you can now choose between different environment setups for your JupyterLab or RStudio analysis. Each environment setup is a set of preinstalled libraries that is available every time an analysis is started and is intended for a specific purpose.

For this first release, we are enabling support for machine learning use cases using CPU or GPU instances. Learn more.

Read more

March 2nd, 2020

Improvements: Task validation allows additional characters in file names

We updated task validation to allow ~ (tilde) and # (hash) characters in input file names. When executing tasks on the Platform, you will be able to use input files containing those characters in their names.
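The Platform's actual validation rules are not published here, but as a rough illustration, a name check that now accepts ~ and # alongside typical filename characters might look like the following (the character class is an assumption, not the Platform's real rule set):

```python
import re

# Hypothetical sketch of a file-name check that now accepts "~" and "#".
# The character class below is illustrative, not the Platform's actual rules.
ALLOWED = re.compile(r"^[\w .\-~#]+$")

def is_valid_input_name(name: str) -> bool:
    """Return True if the file name contains only permitted characters."""
    return bool(ALLOWED.match(name))

print(is_valid_input_name("sample~1#reads.fastq"))  # True
```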

Recently published apps

The following additions were made to the Public Apps Gallery:

  • FastQC was updated to CWL1.0 and its version was bumped to the latest release.
  • The seq2HLA tool was published in CWL1.0 as well. This tool does HLA typing from RNA-Seq data.
  • Sleuth 0.30.0 was published in CWL1.0. This tool performs differential expression analysis on results from pseudoaligners such as Salmon or Kallisto. The tool is wrapped in accordance with our existing differential expression tools (like DESeq2), so the user experience should be similar.
  • The Kallisto/Salmon Sleuth workflow was also published as a CWL1.0 app. This workflow provides an end-to-end read quantification and differential expression solution: starting from FASTQ files, reads are first quantified with either Kallisto or Salmon (as chosen), both of which offer ultra-fast transcript quantification, and a differential expression report is then generated with Sleuth.
Read more

February 10th, 2020

Recently published apps

The following apps had their versions updated and were bumped to CWL1.0:

  • STAR
  • STAR Workflow
  • STAR-Fusion
  • DeepVariant

Read more

February 5th, 2020

Improvements: Docker repository management

This release of Docker repository management improvements provides a better user interface as well as additional functionality. The new Docker registry section under the Developer tab is introduced to:

  • Provide more details on Docker images, such as tag, size, image ID, SHA digest, and time of last update.
  • Provide logs for all push and delete actions over a repository.
  • Provide example command lines for docker login, push and pull commands.
  • Allow users to delete a Docker repository or delete an image by tag.
  • Allow users to manage Docker repository membership and permission level.
  • Allow users to create new empty repositories. This means that users can now create a Docker repository and mark it as private before pushing any content (images) to the repository.

Additionally, we changed the default privacy setting for Docker repositories to private. This change will only affect newly created repositories (when images are pushed for the first time to a non-existing repository). Privacy settings for existing Docker repositories remain the same as before.

For non-division users:

This release also introduces stricter repository naming. Previously, users were allowed to push images under their username, but could also replace the “.” character with the “_” character and vice versa. Starting with this release, Docker repository names must match usernames exactly. Existing Docker images will, of course, be preserved in order to keep existing workflows functional. For example: my username is rosalind.franklin and I want to push an ubuntu image, so I name my Docker repository images.sbgenomics.com/rosalind.franklin/ubuntu.
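Following the naming rule above, the fully-qualified repository name can be derived directly from the username and image name. A small sketch (the registry host and example values mirror the text above; the CLI steps in the comments are the standard Docker commands, not Platform-specific ones):

```python
# Build the fully-qualified Docker repository name from a username,
# following the rule above: the repository name must match the username
# exactly, including any "." characters.
REGISTRY = "images.sbgenomics.com"

def repository_name(username: str, image: str) -> str:
    return f"{REGISTRY}/{username}/{image}"

repo = repository_name("rosalind.franklin", "ubuntu")
print(repo)  # images.sbgenomics.com/rosalind.franklin/ubuntu

# The corresponding standard Docker CLI steps would then be:
#   docker login images.sbgenomics.com
#   docker tag ubuntu images.sbgenomics.com/rosalind.franklin/ubuntu
#   docker push images.sbgenomics.com/rosalind.franklin/ubuntu
```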

Read more

December 30th, 2019

Release: GDC Datasets version update

As of December 27, GDC datasets available through the Data Browser and the API correspond to GDC Data Release 21.

Recently published apps

The Tabix 1.9 toolkit was updated to CWL1.0.

Read more

December 17th, 2019

Bulk moving of files and folders via the API

To help you further optimize your API usage and reduce the number of calls required to organize files and folders within projects, we have introduced the option to move files or folders in bulk from one project location to another. Bulk move is aimed at improving API usage and the overall experience for users who run analyses at scale via the API. For more information, please see the related documentation pages.
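The key point of bulk move is grouping many file IDs into a single request rather than issuing one call per file. As a sketch only, assuming a JSON body containing a list of file IDs plus a destination project (the field names here are illustrative assumptions; consult the linked API documentation for the real request shape):

```python
import json

# Illustrative sketch only: the field names ("file_ids", "project") are
# assumptions -- see the API documentation for the actual request shape.
def build_bulk_move_body(file_ids, destination_project):
    """Group many file IDs into one request body instead of one call per file."""
    return {
        "file_ids": list(file_ids),
        "project": destination_project,
    }

body = build_bulk_move_body(
    ["file-id-1", "file-id-2", "file-id-3"],  # placeholder IDs
    "rosalind.franklin/my-project",
)
print(json.dumps(body, indent=2))
```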

Recently published apps

The following toolkits had their versions updated and were bumped to CWL1.0:

  • SnpEff
  • Samtools
Read more

November 25th, 2019

Improvements: Password policy changes on the Platform

In order to further increase the security of the Seven Bridges Platform and user data, we have enforced a stricter password policy. Specifically, the changes include:

  • After 5 unsuccessful login attempts, you will be locked out of your account.
  • When changing your password, you will not be able to reuse a password that was already used in the past 365 days.
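The two rules above can be modeled as a minimal sketch. This is illustrative logic only, not the Platform's implementation; the history structure and function names are assumptions:

```python
import datetime

MAX_FAILED_ATTEMPTS = 5                      # lockout threshold from the policy above
REUSE_WINDOW = datetime.timedelta(days=365)  # password-reuse window from the policy above

def is_locked_out(failed_attempts: int) -> bool:
    """Lock the account after 5 unsuccessful login attempts."""
    return failed_attempts >= MAX_FAILED_ATTEMPTS

def can_set_password(candidate_hash: str, history: dict, now: datetime.datetime) -> bool:
    """Reject a password whose hash was used within the past 365 days.

    `history` maps password hashes to the time they were last used
    (an illustrative structure, not the Platform's own).
    """
    last_used = history.get(candidate_hash)
    return last_used is None or now - last_used > REUSE_WINDOW
```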

New: Platform status page now available at status.sevenbridges.com

In order to enable easier communication of real-time Platform status, we have introduced the Platform status page at status.sevenbridges.com. The page covers the current state of several functional parts of the Platform, as well as our public website and the Platform’s documentation website. There is also an overview of past incidents and a detailed view of the complete incident history.

Read more