If migration and keeping the data up to date are easier when you store data from different sources separately, then I might leave them that way. But when you query from the application, I would create spatial views that merge/union the data...
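To illustrate the view approach: a minimal sketch using SQLite (a real setup would use PostGIS and a proper geometry column; the table, column, and source names here are invented). Each source keeps its own table, and one view unions them so the application queries a single name:

```python
import sqlite3

# Two hypothetical per-source tables, merged through one view.
# geom_wkt is a plain-text stand-in for a real geometry column.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE admin_area_src_a (id INTEGER, name TEXT, geom_wkt TEXT);
CREATE TABLE admin_area_src_b (id INTEGER, name TEXT, geom_wkt TEXT);

-- One view that unions the per-source tables and tags each row with its origin
CREATE VIEW admin_area AS
    SELECT 'source_a' AS source, id, name, geom_wkt FROM admin_area_src_a
    UNION ALL
    SELECT 'source_b' AS source, id, name, geom_wkt FROM admin_area_src_b;
""")
conn.execute("INSERT INTO admin_area_src_a VALUES (1, 'District 1', 'POLYGON((0 0,1 0,1 1,0 0))')")
conn.execute("INSERT INTO admin_area_src_b VALUES (1, 'District 1', 'POLYGON((0 0,1 0,1 1,0 0))')")

# The application only ever touches the view:
rows = conn.execute("SELECT source, name FROM admin_area ORDER BY source").fetchall()
```

The per-source tables can then be reloaded or migrated independently without the application noticing.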
Functionally it does not make sense to store these three features in one single table. They are very different things, so probably have different attributes (for example: an administrative area will most likely not have a street name). Therefore, if you want/need to keep the attributes besides the geometry, store them in three different tables.
It is possible to store everything in one table, but it makes things harder to understand. Keep it simple, and you can keep the overview.
The question is different...
I have the same features from different data sources.
Admin areas are provided by four different data sources, and all four sources have the same attributes (columns).
The question is whether to store admin areas in four different tables according to data source, or in a single table.
Ah. In that case, I would add a column that signals the source, and keep everything from one feature in one table. Do be careful of overlapping information, though; that might cause unexpected results. But one table per feature, with a column that tells you where the information came from (most organizations I work for do it that way, some even adding accuracy/reliability and start- and end-date columns).
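The one-table-per-feature layout described above could look like this (a sketch in SQLite; all names, including the accuracy and validity-date columns, are illustrative, and geom_wkt stands in for a real geometry column):

```python
import sqlite3

# Single admin_area table covering all four sources, with a "source" column
# plus the optional accuracy and start/end-date columns mentioned above.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE admin_area (
    id         INTEGER,  -- feature identifier
    source     TEXT,     -- which data source the row came from
    accuracy   REAL,     -- optional: reliability of that source
    start_date TEXT,     -- optional: when this record became valid
    end_date   TEXT,     -- optional: when it stopped being valid (NULL = current)
    name       TEXT,
    geom_wkt   TEXT      -- placeholder for a real geometry column
)""")
conn.executemany(
    "INSERT INTO admin_area VALUES (?,?,?,?,?,?,?)",
    [(1, 'source_a', 0.9, '2020-01-01', None, 'District 1', 'POLYGON((0 0,1 0,1 1,0 0))'),
     (1, 'source_b', 0.7, '2021-01-01', None, 'District 1', 'POLYGON((0 0,1 0,1 1,0 0))')])

# Filtering by source is then just a WHERE clause:
n = conn.execute("SELECT COUNT(*) FROM admin_area WHERE source = 'source_a'").fetchone()[0]
```

With this layout you get one table per feature type, and the source column lets you filter, compare, or audit rows per data source.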
It depends on your business needs, your requirements, the expected results. Usually you do not want duplicate geometries, unless they signify different features (for example: the area of a municipality could be the same area that a fire brigade covers, so then you might want two records - because they would have different attributes but the same geometry).
If you have to do analysis that depends on the number of features inside other features, then having duplicates will give you wrong results. Without knowing the requirements and the business rules you have to work with, it is difficult to make good recommendations with respect to your data model.
In most cases however you do not want duplicates of any form, and you want to use the best possible quality of data for your goals. So in your case I would determine the quality of the different datasets, and keep the best quality, discarding the duplicates of lower quality. This would then give you one single dataset that you could use for your analysis.
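One way to express "keep the best quality, discard the lower-quality duplicates" in SQL (again a hedged sketch in SQLite with invented names, assuming an accuracy column rates each source): keep only the highest-accuracy record per feature id.

```python
import sqlite3

# Duplicate admin areas from different sources, rated by an accuracy column.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE admin_area (id INTEGER, source TEXT, accuracy REAL, name TEXT)")
conn.executemany("INSERT INTO admin_area VALUES (?,?,?,?)", [
    (1, 'source_a', 0.9, 'District 1'),
    (1, 'source_b', 0.6, 'District 1'),   # lower-quality duplicate of id 1
    (2, 'source_b', 0.8, 'District 2'),
])

# Delete every row whose accuracy is below the best accuracy for that id.
conn.execute("""
    DELETE FROM admin_area
    WHERE accuracy < (SELECT MAX(b.accuracy) FROM admin_area b
                      WHERE b.id = admin_area.id)
""")
kept = conn.execute("SELECT id, source FROM admin_area ORDER BY id").fetchall()
```

This leaves one record per feature, taken from whichever source rated best, which is the single clean dataset you would then run your analysis against.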
So in short: determine what you need, and what you need it for (and how long you need it, whether there will be frequent updates, etc. etc.). Then clean up your data according to those needs and requirements.
But with the little information you gave so far, it is very difficult to give good recommendations.