Databricks brings a rich suite of new features along with the existing ones that Zeppelin already supports. The move is aimed at stability and better cluster management: the recurring problems of notebooks not running, memory outages, and the like will now be solved permanently.
For the EU cluster, click on the following link and choose single sign-on; your account will be created.
Single sign-on is not enabled for the other two clusters, so everyone there will receive an invitation link to create an account; you can bookmark it. The Google form is below.
To run queries, you first need to attach a cluster at the top left of the notebook.
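If you want a quick sanity check that the attachment worked, a trivial query like the one below should return immediately (`spark` is the session the Databricks runtime injects into every notebook):

```python
# Sanity check after attaching a cluster: list the databases the
# notebook can see. `spark` is provided by the Databricks runtime.
spark.sql("SHOW DATABASES").show()
```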
Below is a comparison sheet that should give you a sense that your comfort zone will not be disturbed (okay, maybe a little :D).
| Feature | Zeppelin | Databricks | Remarks |
|---|---|---|---|
| Supports SQL | yes | yes | Same MySQL dialect, so zero change in style |
| Supports Python | yes | yes | All popular ML and AI libraries (pandas, matplotlib, etc.) are supported; you can work with the read_api directly from Python and build complex ML flows (see the first sketch below the table) |
| FTP export possible | yes | yes | More power to the end user to customize the flow, since the new flow is in Python (sketch below) |
| Dynamic variable support in queries | yes | yes | You can build your own mini BI tool (sketch below) |
| Scheduling of complex time intervals | yes | yes | No need to write a regex; a clean UI handles complex time intervals |
| S3 path reading and table creation | yes | yes | See the sketch below |
| Access to all orgs in the cluster | yes | no; you need to specify the orgs you will work on (Google form below) | This is because Databricks will be shared with external, non-Capillary teams as well |
| Everyday sync with a 24-hour lag | yes | yes | The back-end is the same for both |
| Custom databases and tables | yes | yes | All data is stored in S3 by default, which fixes the Zeppelin issue of notebooks getting stuck when memory is full |
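To make the Python row concrete, here is a minimal sketch of querying a table with Spark SQL and plotting the result with pandas and matplotlib. The `analytics.events` table and its columns are hypothetical placeholders, not real tables in our clusters:

```python
import matplotlib.pyplot as plt

# Pull a Spark SQL result into pandas for plotting. The database,
# table, and column names below are placeholders.
pdf = spark.sql("""
    SELECT event_date, COUNT(*) AS events
    FROM analytics.events
    GROUP BY event_date
    ORDER BY event_date
""").toPandas()

pdf.plot(x="event_date", y="events", kind="line")
plt.title("Daily event volume")
plt.show()
```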
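Since the FTP flow is now plain Python, an export can be scripted end to end. A rough sketch using the standard-library ftplib; the host, credentials, and filename are all made-up examples:

```python
import io
from ftplib import FTP

# Turn a (placeholder) query result into an in-memory CSV, then push
# it to an FTP server. Host, credentials, and filename are examples.
csv_bytes = (spark.sql("SELECT * FROM analytics.events")
                  .toPandas()
                  .to_csv(index=False)
                  .encode("utf-8"))

with FTP("ftp.example.com") as ftp:
    ftp.login("user", "secret")
    ftp.storbinary("STOR events.csv", io.BytesIO(csv_bytes))
```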
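Dynamic variables in Databricks are handled by notebook widgets: define a widget once and reference its value from your queries, which is what makes the mini BI tool pattern possible. The widget name and default below are examples:

```python
# A widget acts as a dynamic variable: it renders as an input box at
# the top of the notebook, and queries read its current value.
dbutils.widgets.text("org_id", "0", "Org ID")  # name/default/label are examples
org_id = dbutils.widgets.get("org_id")

spark.sql(f"SELECT * FROM analytics.events WHERE org_id = {org_id}").show()
```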
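Finally, reading from an S3 path and turning it into a table is a short PySpark sketch; the bucket, prefix, and table names are placeholders. Because the resulting tables live on S3 rather than in notebook memory, this is also what removes the old out-of-memory failure mode:

```python
# Read files straight from a (placeholder) S3 path and persist them
# as a table backed by S3 storage.
df = spark.read.parquet("s3a://my-bucket/exports/events/")

spark.sql("CREATE DATABASE IF NOT EXISTS my_db")  # placeholder database
df.write.mode("overwrite").saveAsTable("my_db.events")
```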
Below is the form you need to fill in to get started with Databricks; once you submit your details, you will also receive a link to create an account.
Currently, Zeppelin notebooks exported as JSON cannot be imported directly into Databricks. We are working on a converter that will transform the exported JSON into a Databricks-compatible format and will share an update soon. Until then, notebooks have to be migrated by copy-paste.