This blog post shows how to write Spark output to a single file.
Using df.coalesce(1), we can force Spark to write all the data through a single task, producing one output file:
result_location = "dbfs:///mnt/datalake/unmesha/output/"
df.coalesce(1).write.format("csv").options(header='true').mode("overwrite").save(result_location)
However, Spark still writes to a directory: alongside the single part-*.csv file you will see a _SUCCESS marker and other metadata files, and the CSV itself has an auto-generated name.
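You can confirm this by listing the output directory (a quick check, assuming you are running in a Databricks notebook where dbutils is available; the exact file names will differ per run):

# Inspect what Spark actually wrote to the output directory
for f in dbutils.fs.ls(result_location):
    print(f.name)
# Typical contents look something like:
#   _SUCCESS
#   _committed_...
#   _started_...
#   part-00000-<uuid>.csv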
To end up with a single, cleanly named CSV, we can rename the part file and remove the directory using dbutils:
# Write the data to a single part file inside the output directory
result_location = "dbfs:///mnt/datalake/unmesha/output/"
df.coalesce(1).write.format("csv").options(header='true').mode("overwrite").save(result_location)

# Find the generated part file (the only .csv in the directory)
files = dbutils.fs.ls(result_location)
csv_file = [x.path for x in files if x.path.endswith(".csv")][0]

# Move it next to the directory under a clean name, then delete the directory
dbutils.fs.mv(csv_file, result_location.rstrip('/') + ".csv")
dbutils.fs.rm(result_location, recurse=True)
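If you do this often, the steps fold naturally into a helper. Below is a minimal sketch under the same assumptions (a Databricks notebook with dbutils, and a DBFS output path); write_single_csv is a hypothetical name for this post, not a built-in:

def write_single_csv(df, result_location):
    """Write df as one CSV file at <result_location>.csv (hypothetical helper)."""
    # Funnel all data through one task so only one part file is produced
    df.coalesce(1).write.format("csv").options(header='true').mode("overwrite").save(result_location)
    # Pick the single part file out of the directory Spark created
    csv_file = [x.path for x in dbutils.fs.ls(result_location) if x.path.endswith(".csv")][0]
    # Rename it and remove the now-redundant directory
    dbutils.fs.mv(csv_file, result_location.rstrip('/') + ".csv")
    dbutils.fs.rm(result_location, recurse=True)

write_single_csv(df, "dbfs:///mnt/datalake/unmesha/output/")

One caveat: coalesce(1) pushes the entire dataset through a single task, so this approach is only suitable for output small enough for one executor to handle.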