This is going to be a short post, which I am writing as a reminder for my future self as well as any of you reading out there.
If you are troubleshooting an error with your pipeline, especially if you work in a database system that has layered views, make sure you dig down through every layer of your data setup before you open a ticket with Microsoft. I learned this the annoying way while spending a work week on a ticket with Microsoft for a very strange issue I was seeing in a pipeline that uses a Synapse Serverless SQL Pool. We had checked so many things that week with no luck in changing how the pipeline ran, and then the pipeline simply went back to working when I ran it outside of its schedule.
The Error
The error I was seeing made it look like the Serverless SQL Pool views, which use an OPENROWSET call against parquet files, were referencing the wrong source files, even though I had confirmed multiple times that the view definitions were correct. For example, the view was written to use parquet files under TestDatabase/MyTable/** as its source, but the error made it seem like it was instead pulling data from TestDatabase/OtherRandomTable/**, which was confusing to say the least. I thought the Serverless node was broken, or had a bug that was making the views look at the “OtherRandomTable” files instead of the correct files.
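For context, the views in question look roughly like the sketch below. The storage account, container, and view name are placeholders I made up, but the OPENROWSET-over-parquet pattern is the standard one for Serverless SQL Pool views.

```sql
-- Minimal sketch of a Serverless SQL Pool view over parquet files in the data lake.
-- The storage account, container, and view name are placeholders.
CREATE VIEW dbo.vw_MyTable AS
SELECT *
FROM OPENROWSET(
    BULK 'https://mydatalake.dfs.core.windows.net/mycontainer/TestDatabase/MyTable/**',
    FORMAT = 'PARQUET'
) AS [rows];
```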
The Cause
The error happened because multiple views used a CROSS APPLY against another view tied to a parquet file in the data lake, and that parquet file was being deleted and recreated by a parallel pipeline. When the failing pipeline tried to reference its views, it couldn’t resolve that base view because the source file had not yet been recreated by the parallel pipeline. It makes sense and is so obvious in hindsight, but it took Microsoft support directly asking about it for me to realize that I had a view referencing another view, whose definition I also needed to check.
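To picture the layering, the dependency chain looked something like the hypothetical sketch below: the view the failing pipeline queries does a CROSS APPLY against a second view, and that second view is the one reading the parquet files the parallel pipeline deletes and recreates. The view and column names here are made up for illustration.

```sql
-- Hypothetical illustration of the layered views. dbo.vw_BaseLookup reads
-- parquet files that a parallel pipeline deletes and recreates, so queries
-- against dbo.vw_Enriched fail while those files are missing, even though
-- the definition of dbo.vw_Enriched itself is correct.
CREATE VIEW dbo.vw_Enriched AS
SELECT m.*, l.LookupValue
FROM dbo.vw_MyTable AS m
CROSS APPLY (
    SELECT TOP 1 b.LookupValue
    FROM dbo.vw_BaseLookup AS b
    WHERE b.KeyColumn = m.KeyColumn
) AS l;
```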
The change I needed to make was to update the pipeline triggers so that the process deleting and recreating the base parquet files would be finished before the second pipeline ran and tried to use those files.
If I had done my due diligence and dug through every layer of the data environment, which I am normally good about in other scenarios, I would have quickly and easily discovered the issue myself. But sometimes we need to learn the hard way because our brains aren’t running at full capacity. (It also helped that I finally had dedicated time set aside for this problem, so I wasn’t trying to multitask multiple work items at once.)
Summary
If you are troubleshooting ETL failures of any kind, make sure you dig down through all layers of the process to ensure you have checked everything possible related to your failure before reaching out to support. They’ll happily help you find what you missed, but it will save everyone time if you can figure it out yourself first.