I am new to Databricks and MS SQL.
I was able to read the necessary table from MS SQL into Databricks as follows:
driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
database_host = "MYSERVERNAME"
database_port = "1433" # update if you use a non-default port
database_name = "MYDATABASENAME"
table = "MYTABLENAME"
user = "MYUSERNAME"
password = "MYPASSWORD"
url = f"jdbc:sqlserver://{database_host}:{database_port};database={database_name}"
remote_table = (spark.read
    .format("jdbc")
    .option("driver", driver)
    .option("url", url)
    .option("dbtable", table)
    .option("user", user)
    .option("password", password)
    .load()
)
display(remote_table.select("runid"))
However, I am struggling to actually use this table on Databricks. How can I save this table into Databricks so I can work with it? I tried different code samples from the web but I haven't been successful. If you can provide me with an example, not just code, I would really appreciate it.
I want to be able to save this table into Databricks so that I can create another table from the MS SQL table using queries. Is something like the sketch below what I should be doing?
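To make my question concrete, here is my guess at what the saving step might look like, based on my reading of the docs. The schema name my_schema and the table names mssql_copy and runid_summary are placeholders I made up, and I am not sure this is the right approach:

# Save the JDBC result as a managed table in Databricks
# ("my_schema.mssql_copy" is just a placeholder name)
(remote_table.write
    .mode("overwrite")
    .saveAsTable("my_schema.mssql_copy")
)

# Then create another table from the saved copy using a SQL query
spark.sql("""
    CREATE OR REPLACE TABLE my_schema.runid_summary AS
    SELECT runid, COUNT(*) AS row_count
    FROM my_schema.mssql_copy
    GROUP BY runid
""")

Is saving it as a table like this the right way, or is there a better pattern?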
I tried the code from https://docs.databricks.com/en/connect/external-systems/jdbc.html and https://learn.microsoft.com/en-us/azure/databricks/connect/external-systems/jdbc, but it did not work.
Maybe because I am new to coding, I need to see an actual, realistic example of how to write the code using my real table information.