Can you describe the full sequence of steps, or post the MaxL script you are using?
In general, when you load data to an ASO database, even without explicitly creating buffers, the following happens:
1. A data load buffer is initialized.
2. Data is loaded into the buffer.
3. The buffer contents are committed to the database, and the buffer is destroyed.
If something goes wrong during step 2, the buffer is destroyed, but Essbase may still attempt the commit, producing the error you are seeing. Check the application log or the MaxL output for the data load and see if it contains more detail.
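For reference, the same sequence can be made explicit in MaxL. A minimal sketch, assuming placeholder application/database names and buffer ID:

```
/* 1. initialize a load buffer */
alter database 'App'.'Database' initialize load_buffer with buffer_id 1;

/* 2. load data into the buffer -- a failure here destroys the buffer */
import database 'App'.'Database' data from data_file 'data.txt'
    to load_buffer with buffer_id 1 on error abort;

/* 3. commit the buffer contents to the cube */
import database 'App'.'Database' data from load_buffer with buffer_id 1;
```

If step 2 fails and a later statement (or tool) still tries to commit buffer 1, you get exactly the "buffer does not exist" style of error.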
It's also possible that you're explicitly trying to commit a buffer that doesn't exist, but nine times out of ten this is something else going wrong during the load: a bad file path, a bad data source, an inaccessible SQL source, a bad load rule, etc.
And specifically, it's likely a problem with a SQL load rule. Check the query and connection in the rule; look at the ODBC setup to confirm that the connection and credentials are valid (although I'd expect a more specific message if the login fails); extract the SQL from the load rule and run it against the source database with a native database tool; etc.
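For a SQL load rule, the MaxL typically looks something like the sketch below (the SQL user, password, rule name, and error file name are placeholders). If this statement fails, any subsequent commit hits a buffer that no longer exists:

```
import database 'App'.'Database' data
    connect as 'sql_user' identified by 'sql_password'
    using server rules_file 'myrule'
    on error write to 'myrule.err';
```

Running the same statement interactively in the MaxL shell, and checking the error file it writes, usually narrows down whether the connection, the query, or the rule mapping is at fault.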
First, take an extract of the data from the view and load it via Notepad or Excel with the rule file, and compare the results. Then check the data column in your rule file: sometimes the data column name changes when loading from SQL views.
As a last resort, you can destroy the load buffer by its ID.
MaxL: query database 'App'.'Database' list load_buffers; then destroy any buffer you see listed:
MaxL: alter database 'App'.'Database' destroy load_buffer with buffer_id 1;
Are you loading a data file exported from the ASO cube? If yes, use MaxL to load the data file after placing it in the cube folder:
import database 'Test'.'OLAP' data from server text data_file 'data.txt' on error abort;
In this scenario, place the data.txt file in the path below: