I have one Databricks streaming table that is updated from Kafka in append mode, running in continuous mode in a DLT pipeline.
I need to use the same Kafka data in another DLT pipeline, so I thought that instead of creating a new streaming table I could read from the already created one.
But I am getting this error:
Dataset defined in the pipeline but could not be resolved.
I am reading the already existing streaming table like this:
df = spark.readStream.table("Streaming_table_Name")
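For context, here is a minimal sketch of what I am trying to do across the two pipelines. The Kafka options and the catalog/schema names are placeholders; my understanding is that a table defined in another pipeline may need to be read by its fully qualified Unity Catalog name, since a bare name only resolves for datasets defined inside the same pipeline:

```python
import dlt

# Pipeline 1: streaming table appended to from Kafka
# (connection options below are placeholders).
@dlt.table(name="Streaming_table_Name")
def kafka_events():
    return (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "host:9092")  # placeholder
        .option("subscribe", "my_topic")                 # placeholder
        .load()
    )

# Pipeline 2: reuse the table above instead of opening a new Kafka
# connection, referencing it by its fully qualified name
# (my_catalog.my_schema are placeholders).
@dlt.table(name="downstream_table")
def downstream():
    return spark.readStream.table("my_catalog.my_schema.Streaming_table_Name")
```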