
The SQL Server code you supplied is producing an excessively large file.

I ran the SQL Server code (see below) against our target database, and our DBA killed the job when the output file reached 300,000 rows. Is it typical for the output file to get that large, and can a file with 300,000+ rows be successfully imported into Lucidchart?

Here is the SQL code we used:

 

SELECT 'sqlserver' dbms,
       t.TABLE_CATALOG,
       t.TABLE_SCHEMA,
       t.TABLE_NAME,
       c.COLUMN_NAME,
       c.ORDINAL_POSITION,
       c.DATA_TYPE,
       c.CHARACTER_MAXIMUM_LENGTH,
       n.CONSTRAINT_TYPE,
       k2.TABLE_SCHEMA,
       k2.TABLE_NAME,
       k2.COLUMN_NAME
FROM INFORMATION_SCHEMA.TABLES t
LEFT JOIN INFORMATION_SCHEMA.COLUMNS c
       ON t.TABLE_CATALOG = c.TABLE_CATALOG
      AND t.TABLE_SCHEMA = c.TABLE_SCHEMA
      AND t.TABLE_NAME = c.TABLE_NAME
LEFT JOIN (INFORMATION_SCHEMA.KEY_COLUMN_USAGE k
           JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS n
             ON k.CONSTRAINT_CATALOG = n.CONSTRAINT_CATALOG
            AND k.CONSTRAINT_SCHEMA = n.CONSTRAINT_SCHEMA
            AND k.CONSTRAINT_NAME = n.CONSTRAINT_NAME
           LEFT JOIN INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS r
             ON k.CONSTRAINT_CATALOG = r.CONSTRAINT_CATALOG
            AND k.CONSTRAINT_SCHEMA = r.CONSTRAINT_SCHEMA
            AND k.CONSTRAINT_NAME = r.CONSTRAINT_NAME)
       ON c.TABLE_CATALOG = k.TABLE_CATALOG
      AND c.TABLE_SCHEMA = k.TABLE_SCHEMA
      AND c.TABLE_NAME = k.TABLE_NAME
      AND c.COLUMN_NAME = k.COLUMN_NAME
LEFT JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE k2
       ON k.ORDINAL_POSITION = k2.ORDINAL_POSITION
      AND r.UNIQUE_CONSTRAINT_CATALOG = k2.CONSTRAINT_CATALOG
      AND r.UNIQUE_CONSTRAINT_SCHEMA = k2.CONSTRAINT_SCHEMA
      AND r.UNIQUE_CONSTRAINT_NAME = k2.CONSTRAINT_NAME
WHERE t.TABLE_TYPE = 'BASE TABLE';

Official comment


Hi Michael, 

Thanks for posting in the Lucidchart Community! Just to make sure we’re on the same page, are you trying to import your ERD from SQL Server? If so, it looks like you have run the right query on your database. Please note that the DBMSs directly supported by Lucidchart for ERD import are PostgreSQL, MySQL, SQL Server, and Oracle, and each of these has its own Lucidchart-provided query. When run against your database, those queries generate a .csv, .tsv, or .txt file, so the size of the output depends on the structure of your database (e.g., the number of entities and attributes). My guess is that your database schema is very large, and the vast number of rows in your output file simply reflects that scale.

Unfortunately, there is nothing we can do on our end to reduce the size of the file generated by your database query. Additionally, even if you do generate an output file that large, it's unlikely that you will be able to import it into Lucidchart successfully. As an alternative, you can modify the provided query so that it pulls only the information that is relevant to you (probably a small subset of the entire database), which will produce far fewer rows in the output file; see the sketch below.
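
For example, a minimal way to narrow the results (assuming, purely for illustration, a schema named dbo and tables named Orders and Customers; substitute the names you actually need) is to add conditions to the final WHERE clause of the query above:

WHERE t.TABLE_TYPE = 'BASE TABLE'
  AND t.TABLE_SCHEMA = 'dbo'                       -- hypothetical: export only the dbo schema
  -- AND t.TABLE_NAME IN ('Orders', 'Customers')   -- optional: narrow further to specific tables
;

Because the filter is applied to the TABLES alias t, it carries through the joins to the column and constraint tables, so the output file only contains rows for the entities you actually want to diagram.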

For more detailed information on how to import your ERD into Lucidchart, please review our Entity Relationship Diagrams (ERDs) article and let me know if you have any questions about the intended functionality of this feature. Hope this helps!
