Over the past decade, subgraph mining over large collections of graph databases has become a crucial problem. Existing works on subgraph mining are based on a threshold value that returns similar graphs for a given query graph. However, the graphs in the answer set for the same threshold can vary in structure and context. In addition, scalability remains a major problem due to insufficient storage, and such systems can also suffer from security issues: in a distributed environment, where many queries arrive from different users, attackers may gain access to the graph data. These three problems persist in current subgraph mining approaches. To address these drawbacks, the proposed work introduces a Blockchain-based Triune Layered Architecture for authenticated query search in large-scale dynamic graphs (BTLA-LSDG). BTLA-LSDG handles a two-fold process, graph indexing and authenticated query search (query processing), implemented across three layers: the Data Generation Layer, the Data Storage Layer, and the Service Layer. Initially, data owners are authenticated to the blockchain using the Four-Q-Curve algorithm. Graph indexes are constructed by the data owners, and a merged graph index is constructed by the service providers. From the uploaded graph index, a hash index is constructed using SHA-3; this hash index is fed into the blockchain to form a Dendrimer-Fractal Index. On the other side, data users submit queries with authentication. For every authenticated query, a four-fold process is carried out. First, a Multi-Constraint-based Belief Entropy function computes the feature sets for the given query. Second, Dual Similarity-based MapReduce maps and reduces the relevant subgraphs using the optimal feature sets. Third, a Recurrent Neural Network (RNN) is used for subgraph isomorphism testing. Finally, a graph index refinement process improves the query results: the query error is computed for each user query at the end of retrieval, and fuzzy logic uses this error to refine the graph index dynamically. The proposed scheme is implemented in a Hadoop environment, and the results show better efficiency in terms of scalability, security, and storage. It is further evaluated in terms of precision, recall, F-measure, accuracy, error rate, query response time, and positive results.
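
To illustrate the hash-index construction step mentioned above, the following is a minimal Python sketch of hashing graph-index entries with SHA-3. The `graph_index` layout and the `sha3_hash_index` helper are hypothetical placeholders; the actual BTLA-LSDG index structure and its integration with the blockchain and the Dendrimer-Fractal Index are described in the paper body.

```python
import hashlib
import json


def sha3_hash_index(graph_index):
    """Build a hash index over graph-index entries using SHA-3 (sha3_256).

    `graph_index` is assumed to map a graph identifier to its index entry
    (e.g., edges and labels); the exact entry format is an assumption made
    for illustration only.
    """
    hash_index = {}
    for graph_id, entry in graph_index.items():
        # Serialize each entry deterministically before hashing so that the
        # same entry always yields the same digest.
        payload = json.dumps(entry, sort_keys=True).encode("utf-8")
        hash_index[graph_id] = hashlib.sha3_256(payload).hexdigest()
    return hash_index


# Example usage with a toy graph index.
if __name__ == "__main__":
    toy_index = {
        "g1": {"edges": [["a", "b"], ["b", "c"]], "labels": ["a", "b", "c"]},
        "g2": {"edges": [["x", "y"]], "labels": ["x", "y"]},
    }
    for gid, digest in sha3_hash_index(toy_index).items():
        print(gid, digest)
```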