Migration from Zeppelin to Databricks

Databricks brings a rich suite of features on top of what Zeppelin already supports. The motivation behind this move is stability and better management of clusters: the recurring issues of notebooks not running, memory outages, and the like will be solved permanently.

For the EU cluster, click on the following link and then click on single sign-on. Your account will be created.


Single sign-on is not enabled for the other two clusters, so you will receive an invitation link to create an account; you can bookmark it. The Google form is below.

You need to attach a cluster (top left of the notebook) to run queries.

The comparison sheet below should give you a sense that your comfort zone will not be disturbed (maybe a little :D):

| Feature | Zeppelin | Databricks | Notes |
| --- | --- | --- | --- |
| Supports SQL | Yes | Yes | Same MySQL syntax, so zero change in style |
| Supports Python | Yes | Yes | All popular ML and AI libraries (pandas, matplotlib, etc.) are supported; you can work on the read_api directly in Python and build complex ML flows |
| Supports R | Yes | Yes | |
| FTP export possible | Yes | Yes | More power to the end user to customise the flow, as the new flow is in Python |
| Dynamic variable support in queries | Yes | Yes | You can build your own mini BI tool |
| Scheduling of complex time intervals | Yes | Yes | No need to provide a regex; a nice UI for dealing with complex time intervals |
| S3 path reading and table creation | Yes | Yes | |
| Access to all orgs in the cluster | Yes | No; you need to specify the orgs you will work on (Google form below) | This is because Databricks will be shared with external, non-Capillary teams as well |
| Everyday sync with 24-hour lag | Yes | Yes | Back-end is the same for both |
| Custom databases and tables | Yes | Yes | All data is stored in S3 by default, which fixes the Zeppelin issue of notebooks getting stuck when memory is full |
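As a small illustration of the Python support mentioned above, here is a minimal, self-contained pandas sketch. The data and column names are made up for illustration; inside Databricks you would typically load a table via SQL or the read_api rather than constructing a DataFrame by hand:

```python
import pandas as pd

# Hypothetical transaction data; in a Databricks notebook you would
# usually obtain this from a table, e.g. spark.sql("...").toPandas()
df = pd.DataFrame({
    "org": ["a", "a", "b"],
    "amount": [100.0, 50.0, 75.0],
})

# A typical mini-analysis step: total spend per org
totals = df.groupby("org")["amount"].sum()
print(totals.to_dict())  # {'a': 150.0, 'b': 75.0}
```

From here the same notebook cell can feed matplotlib for charts or scikit-learn for modelling, which is what makes the Python support row above useful in practice.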

Below is the form you need to fill in to get started with Databricks; once you submit your details, you will also receive a link to create an account.


Currently, Zeppelin notebooks exported as JSON cannot be imported directly into Databricks. We are working on a converter that will transform the JSON notebooks into a Databricks-compatible format, and will share an update on that soon. Until then, code has to be copy-pasted.
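To give a rough idea of what such a converter involves (this is an illustrative sketch, not the actual tool): Zeppelin stores a note as JSON with a `paragraphs` list, while Databricks can import a Python source file whose cells are separated by `# COMMAND ----------` markers. A minimal mapping between the two might look like this:

```python
import json

def zeppelin_to_databricks(note_json: str) -> str:
    """Sketch: convert a Zeppelin note (JSON string) to a Databricks .py source.

    Copies each paragraph's text into a Databricks cell. It does NOT handle
    interpreter magics (%sql, %md, ...), which the real converter would need
    to translate into Databricks MAGIC comments.
    """
    note = json.loads(note_json)
    cells = [p["text"] for p in note.get("paragraphs", []) if p.get("text")]
    header = "# Databricks notebook source"
    separator = "\n\n# COMMAND ----------\n\n"
    return header + "\n\n" + separator.join(cells) + "\n"

# Example with a tiny fake Zeppelin note
note = json.dumps({"paragraphs": [{"text": "x = 1"}, {"text": "print(x)"}]})
print(zeppelin_to_databricks(note))
```

The output of this sketch can be imported into Databricks via the workspace "Import" dialog as a Python source file.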
