You can read the files into a DataFrame and write them out in Delta format:
Step 1: Read the input CSV file.
Step 2: Write the DataFrame to the ADLS location in Delta format.
Step 3: Create a table on top of that location.
# Step 1: read the input CSV (first row is the header, comma-separated)
myCSV = spark.read.csv("/path/to/input/data", header=True, sep=",")

# Step 2: write it out in Delta format, replacing any existing data and schema
myCSV.write.format("delta").mode("overwrite").option("overwriteSchema", "true").save("/mnt/delta/Employee")

# Step 3: register a table over the Delta location
spark.sql("CREATE TABLE employee USING DELTA LOCATION '/mnt/delta/Employee/'")