Microsoft Fabric: Workspace access, Item level access, SQL policy and object level security

Microsoft Fabric offers multiple layers of security and access control, ranging from Workspace-level to Item-level and, when applicable, Object-level. The access and security mechanisms at the Workspace and Item levels are similar to those that existed in Power BI workspaces. Workspace members (of certain levels) have the option to share an entire workspace by providing direct access or to share specific items, such as individual reports. In Microsoft Fabric, lakehouses and warehouses coexist within the same workspace as Power BI items and sharing them follows the same principles as Power BI items. Both a warehouse and the SQL endpoint of a lakehouse provide SQL access to data. This SQL access to data can be controlled below the Item level, enabling workspace members (of certain levels) to grant access exclusively to specific objects through SQL Policy with Object Level Security (OLS).

In this post, I explore different lakehouse access scenarios with a few test users and demonstrate how SQL Policy can be used to refine data access at object level. While I use a Lakehouse for the examples, the sharing methods and outcomes remain identical for a warehouse.

Let’s first start at the workspace.

Fabric Workspace

In Microsoft Fabric, items such as Power BI reports and semantic models (previously known as Power BI datasets), lakehouses, warehouses, and more are organized into workspaces for collaboration and sharing.

The workspace access model is straightforward. You can grant someone access to all items in a workspace for collaborative purposes, such as developers, or selectively share specific items like a report or a lakehouse. Additionally, for certain items like a warehouse or the SQL Endpoint of a lakehouse, you have the option to restrict SQL access to specific objects within the item, a feature known as Object Level Security (OLS). For instance, you can provide someone with SQL read access to only one table within a warehouse.

Workspace level access

Granting access to all items in a workspace simultaneously is achieved by assigning the individual one of the 4 core roles of Admin, Member, Contributor, or Viewer within the workspace. These roles are meant for individuals engaged in collaborative efforts on the items, such as developers, testers, or administrators. The respective workspace will then be visible in the Workspaces section of the Fabric portal. Admins/Members/Contributors get read/write access to all workspace data and items. Viewers get read access to workspace data with some restrictions. For Power BI items, viewers can open all reports and see data which can be further limited by Row Level Security (RLS) if it has been applied for the respective Power BI semantic model. For warehouses or lakehouses, a viewer only gets connect permission but cannot see any data until additional grants on data are put in place.

For a detailed explanation of these roles, refer to: Roles in workspaces in Microsoft Fabric – Microsoft Fabric | Microsoft Learn.

Item Level access

Those who need to consume content but don’t need to collaborate on content creation don’t need to be direct members of a workspace. For these individuals, items can be shared as needed by clicking the share icon in front of an item in a workspace.

Once these items are shared, they will be listed under the OneLake data hub in the Fabric portal. This method of sharing makes sense for individuals who need to consume reports, such as end users, or those who need to query data with SQL, such as developers from a different workspace who don’t need access to everything in the original workspace. For these individuals, items should be shared individually.

The following shows the options available for sharing a Power BI report. I won’t get in nuances of sharing Power BI items since they have been thoroughly documented and explored. See this for reference: Share Power BI reports and dashboards with coworkers and others – Power BI | Microsoft Learn

The following shows the options available for sharing a Lakehouse or Warehouse.

If no options are selected, users will be given Connect permission but won’t be able to read any data. This means that they will see the items listed under the OneLake data hub but won’t be able to query them. This is the same experience a user with the Viewer role in a workspace has; they will need additional grants to see data.

“Read all SQL endpoint data” permission gives users the ability to read data from the SQL endpoint by using the SQL analytics endpoint explorer for a lakehouse, the Warehouse explorer for a warehouse, or SQL Server Management Studio (SSMS) for either one.

“Read All Apache Spark” permission allows users to read data using Spark with notebooks or use the Lakehouse explorer.

In the rest of this post, I will focus on the lakehouse and its SQL endpoint’s SQL policy options. While I use a lakehouse for the examples, the sharing methods and outcomes remain identical for a warehouse.

At the time of writing this blog, data access with Spark cannot be restricted per object. If you grant someone access to read a lakehouse with Spark, it is an all-or-nothing scenario. As a result, if you intend to provide limited access with SQL, do not also grant Spark access; otherwise users can bypass the SQL policy.

SQL Policy

To refine read access for a Warehouse or the SQL endpoint of a lakehouse, you can impose additional restrictions through SQL policy. After defining the SQL Policy, for example through GRANT/DENY SQL commands, anyone accessing the data—whether as a Viewer of a workspace or an individual granted item-level read access to the Warehouse or SQL Endpoint of a lakehouse—will be bound by Object Level Security (OLS) as specified in the SQL Policy.
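As a sketch of what such a policy looks like in practice, the statements are standard T-SQL permission commands run against the warehouse or SQL endpoint; the principals below are hypothetical UPNs, not accounts from this post:

```sql
-- Hypothetical principals; replace with the actual Azure AD user or group.

-- Allow read access to a single table only:
GRANT SELECT ON [dimension_customer] TO [alex@contoso.com];

-- Block one table for a user who otherwise has broad read access:
DENY SELECT ON [dimension_customer] TO [diego@contoso.com];
```

As with SQL Server, DENY takes precedence over GRANT for the same object, which is what makes the deny-one-table pattern in the scenarios below work.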

Sample Scenarios

For the sample scenarios, I use a lakehouse built on the WideWorldImporters sample data from Microsoft, including two tables: dimension_customer and dimension_employee. Not all scenarios listed have practical applications; some are included only to explore the effects of the order of operations. For example, scenario 4 does not yield the desired outcome, while scenario 6 shows how to achieve it correctly.

The following shows a summary of each scenario.

| Scenario | Step 1 | Step 2 | User Results |
|---|---|---|---|
| 1 | Share Lakehouse with “Read all SQL endpoint data” and “Read All Apache Spark” | — | Can read all data from SQL endpoint and Spark |
| 2 | Share Lakehouse with no options selected | GRANT SELECT ON dimension_customer | Can only read dimension_customer |
| 3 | Share Lakehouse with “Read all SQL endpoint data” | DENY SELECT ON dimension_customer | Can read all tables except dimension_customer |
| 4 * | GRANT SELECT ON dimension_employee | Share Lakehouse with “Read all SQL endpoint data” | Can read all tables |
| 5 | DENY SELECT ON dimension_customer | Share Lakehouse with “Read all SQL endpoint data” | Can read all tables except dimension_customer |
| 6 | GRANT SELECT ON dimension_employee | Share Lakehouse with no options selected | Can only read dimension_employee |
* Not a real scenario, just listed for learning

In each of these scenarios, a lakehouse is shared with users as an item by clicking on the “…” in front of it in the Fabric portal. Even though I will not explore the scenario of a Viewer member of the workspace, the experience parallels that of a user with data shared via “Share Lakehouse with no options selected.” While the viewer gains connect permission, additional permissions are necessary for SQL access.

In the remainder of this post, I will explore the above scenarios and provide screenshots of the permissions at each stage along with the user experience.

Scenario 1

In this scenario, Adelle needs read permission to all tables with both SQL and Spark, as well as Build permission on the default Power BI semantic model (dataset) of the lakehouse. For learning purposes, I grant Adelle the required permissions gradually so that I can show the user experience in steps. In a real scenario, I would select all options at once.

First, I share the lakehouse with no options selected.

At this point, the permissions of the lakehouse (under Manage Permissions option in front of a lakehouse item), show Read permission for Adelle.

As you will see shortly, this Read permission is not what it may appear to be: it only gives her Connect permission; she does not have Read data permission yet.

At this point, the lakehouse, its SQL endpoint and default semantic model (dataset) show up for Adelle in OneLake data hub.

Even though Adelle can see the above items listed, she cannot use them yet because she does not have Read data permission to the items yet. The following screenshot shows what Adelle sees when she opens the Lakehouse.

The following screenshot shows what Adelle sees when she opens the SQL endpoint of the lakehouse.

Next, we can go back to the lakehouse share menu and Grant “Read all SQL endpoint data”.

NOTE: At this point, the permissions of the lakehouse should show ReadData for Adelle in addition to Read. At the time of writing this blog, a bug prevents ReadData from being shown under the Manage Permissions interface. I will update this blog once that has been fixed. Note that this is a display-only issue; ReadData is actually in place.

For comparison purposes, this is what a warehouse’s manage permissions would show for the same exact scenario:

At this time, if Adelle opens the SQL analytics endpoint, she will see all tables listed in the lakehouse.

Next, I’ll give Adelle “Read All Apache Spark” permission. Remember that read with Spark cannot be limited to certain objects at the time of writing this post, so be careful not to give this permission if you intend to limit data access to a few objects with SQL policy.

This time the permissions of the lakehouse show Read and ReadAll for Adelle. In other words, ReadAll is the option you get when you give access to Apache Spark.

Now Adelle can query the data using the lakehouse explorer as well as Spark notebooks.

The last item left is the default Power BI semantic model (dataset). At this point, if she hovers over it, she will see an option to request access but cannot read the data yet.

Next, I’ll give her the last remaining option which is “Build reports on the default dataset”.

Alternatively, I could have given her Build permission by going to the default semantic model’s permissions (click on “…” in front of it in the portal) and selecting “Add build”.

This completes scenario 1. In a real scenario like this, you would give all the permissions at the same time. I only gave the permissions one at a time because I wanted to show what her access state looked like in between.

Scenario 2

In this scenario, Alex needs to have read access with SQL to only one table, dimension_customer. He should not see any other table.

First, I share the lakehouse with Alex without choosing any additional options. This enables Alex to connect to the lakehouse, but he will not be able to read anything yet.

At this point, this is what Alex sees if he tries to use the SQL endpoint:

Next, I give Alex read permission to dimension_customer by running a “GRANT SELECT” SQL command.

GRANT SELECT ON [dimension_customer] TO ""

Note that unlike Azure SQL DB, you don’t need to create a user/login first for Alex. The Grant/Deny commands do that automatically.

I can check the SQL permission by running the following query:

SELECT DISTINCT pr.principal_id, pr.type_desc,
       pr.authentication_type_desc, pe.state_desc, pe.permission_name
FROM sys.database_principals AS pr
INNER JOIN sys.database_permissions AS pe
    ON pe.grantee_principal_id = pr.principal_id;

Note that I myself am set up as a workspace admin with full permissions, including the ability to grant and control permissions for others.
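If it helps, the permission query above can be extended to show which object each permission applies to (a sketch assuming the standard SQL Server catalog views, which the endpoint exposes):

```sql
SELECT pr.name AS principal_name, pr.type_desc,
       pe.state_desc, pe.permission_name,
       OBJECT_NAME(pe.major_id) AS object_name  -- table the GRANT/DENY targets
FROM sys.database_principals AS pr
INNER JOIN sys.database_permissions AS pe
    ON pe.grantee_principal_id = pr.principal_id
WHERE pe.class = 1;  -- class 1 = object- or column-level permissions
```

Filtering on `class = 1` keeps only object-level entries, which is the granularity the SQL policy scenarios in this post operate at.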

Now Alex can see dimension_customer but no other table.

Alex cannot use the Lakehouse explorer since I did not give him the Spark permission. This is key in this scenario: if he had read access with Spark, he could have queried all data with Spark, rendering the SQL policy data access restriction useless.

Scenario 3

In this scenario, Diego needs to have read access with SQL to all tables except dimension_customer. This is the opposite of scenario 2.

First, I share the lakehouse with Diego with “Read all SQL endpoint data”. This will give Diego the permission to connect and query all data with SQL.

Next, I will use “DENY SELECT” to deny him access to dimension_customer.

 DENY SELECT ON [dimension_customer] TO ""

Now the SQL permissions show a DENY SELECT for Diego:

This is what Diego will see in the SQL endpoint.

Note that if this were a real scenario, there would be a few moments, between when the lakehouse is shared with “Read all SQL endpoint data” and when his access to dimension_customer is denied, during which he could actually read dimension_customer. Therefore, in a real scenario where data security is critical even for a few minutes, the reverse order of steps should be followed. See scenario 5 for an example of how this can be done in the right order.

Scenario 4

In this scenario, Grady needs read access to only one table, dimension_employee. The steps in this scenario will not yield the desired result: he will be able to see all data. The only reason this is listed here is to show that sharing a lakehouse with “Read all SQL endpoint data” takes precedence over granting read to one object using SQL policy. This is expected permission behavior and is listed only for demonstration.

First, I give Grady Select permission to dimension_employee.

Note that at this point, Grady cannot see the lakehouse and its related items in the OneLake data hub yet.

Next, I share the lakehouse with “Read all SQL endpoint data” with him.

At this point, the lakehouse and its related items show up in OneLake data hub for Grady.

The following shows what Grady sees when he queries the SQL endpoint. As you can see, he can read all data which is not the desired outcome but we expected this.

Lesson learned: Grant Select on one object + Read All SQL endpoint data = User can read all data with SQL

See scenario 5 for an example of how “DENY SELECT + Read all SQL endpoint data” works.

Scenario 5

In this scenario, Henrietta needs read access to all data except dimension_customer. This is similar to scenario 3, but the steps are done in reverse order so that there is no gap during which she could read the restricted table.

First, I DENY SELECT to Henrietta via SQL policy.

DENY SELECT ON [dimension_customer] TO ""

Then I share the lakehouse with Henrietta with Read All SQL endpoint data.

At this point, the lakehouse and its related items show up in the OneLake data hub.

Henrietta has read access to all tables except dimension_customer.

Lesson learned: “DENY SELECT” + “Read all SQL endpoint data” –> User can read all data except the object they were denied access to. In other words, “Read all SQL endpoint data” does not cancel the SQL policy. The steps taken in this scenario are practical for cases where data should not be exposed even for a few moments (unlike scenario 3, which left a brief gap).
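If the restriction needs to be lifted later, REVOKE removes a previously issued GRANT or DENY for a principal (a sketch with a hypothetical UPN, not an account from this post):

```sql
-- Remove the earlier DENY so that "Read all SQL endpoint data"
-- once again applies to dimension_customer for this user:
REVOKE SELECT ON [dimension_customer] FROM [henrietta@contoso.com];
```

Note that REVOKE only clears the explicit entry; whatever broader item-level sharing is in place then determines what the user can read.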

Scenario 6

In the last scenario, Isiah needs to only read dimension_employee and no other table.

I run a GRANT SELECT SQL command as shown below to give Isiah read access to just one table.

GRANT SELECT ON [dimension_employee] TO ""

Next, I will share the lakehouse without selecting any options. This will give Isiah connect permission.

This is what the lakehouse permissions look like at this point. Note that due to a bug, ReadData is in place for the users with whom I shared the lakehouse with “Read all SQL endpoint data”; however, it is not shown in this interface. This is a temporary, display-only issue that should be resolved soon.

Isiah can connect to the SQL endpoint and sees only dimension_employee.


Microsoft Fabric already provides sound mechanisms for securing and sharing data at multiple levels, from workspace to item to object level. Beyond object level, column and row level security are available as of September 2023; I will explore them in a future post. In its current state, Microsoft Fabric has sufficient sharing and security mechanisms for an enterprise-scale lakehouse/warehouse implementation.

At some point in the future, OneLake security will be added, which will allow table/folder security in OneLake. OneLake security will enable object level security with Spark and beyond.
