You have a Fabric tenant that contains a new semantic model in OneLake.
You use a Fabric notebook to read the data into a Spark DataFrame.
You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
Solution: You use the following PySpark expression:
df.explain()
Does this meet the goal?