Thursday, July 30, 2009

postgres - xml in xml out (part 1 of 2)

I will be using familiar sample data and in fact will begin with the assumption that the data is already in postgres.

Hark back to the 'sample data' postings and use COPY amd_bang_per_watt FROM ... if you are coming to this fresh.

I will deal with 'xml out' first as it is a quick win for an already populated table (my situation).

Previous postings have been quite console-based - this one will be a bit more GUI-oriented, allowing us to explore features of the pgAdmin graphical tool.

There are several ways, including:

  1. Using psql with the html flag and then munging the html into xml
  2. pgAdmin's table view, using the context menu, and outputting with the xml radio button selected
  3. pgAdmin 'Tools/Query Tool' then 'File/Quick report' (xml radio button selected)
I will ignore the first option (psql) and go for the pgAdmin methods, illustrating with some screenshots.

(Item 3 gave me the best results so feel free to skip ahead if you like)

My pgAdmin screenshots are wide and will not resize well for newspaper style columns. Instead I have put them on picasaweb with links in the text.

2. pgAdmin's table view, context menu, then 'Reports':
Selecting 'Reports' from the context menu above gives you six options:
  1. Properties report (an extract is shown below)
  2. DDL report (ddl for table recreation from a script)
  3. Data Dictionary report (an accurate description of what you get)
  4. Statistics Report (Sequential scans count is expectedly high in here)
  5. Dependencies Report (for our table this is empty in an xml style of emptiness)
  6. Dependents Report (whoops - incorrect spellcheck complaint in firefox)
(i) Properties report gives you xml - extract below:
<table>
<columns>
<column id="c1" number="1" name="Property" />
<column id="c2" number="2" name="Value" />
</columns>
<rows>
<row id="r1" number="1" c1="Name" c2="amd_bang_per_watt" />
<row id="r2" number="2" c1="OID" c2="16386" />
<row id="r3" number="3" c1="Owner" c2="postgres" />
<row id="r4" number="4" c1="Tablespace" c2="pg_default" />
<row id="r5" number="5" c1="ACL" c2="{postgres=arwdxt/postgres,someuser=arwd/postgres}" />
<row id="r6" number="6" c1="Primary key" c2="" />
<row id="r7" number="7" c1="Rows (estimated)" c2="50" />
<row id="r8" number="8" c1="Fill factor" c2="" />
<row id="r9" number="9" c1="Rows (counted)" c2="50" />
<row id="r10" number="10" c1="Inherits tables" c2="No" />
<row id="r11" number="11" c1="Inherited tables count" c2="0" />
<row id="r12" number="12" c1="Has OIDs?" c2="No" />
<row id="r13" number="13" c1="System table?" c2="No" />
<row id="r14" number="14" c1="Comment" c2="" />
</rows>
</table>

Click the links in the original six point list for googledocs of all the xml files in full.

What I did not find in any of the six reports was the actual row data from the table - I will come to that soon.

3. Tools/Query Tool then File/Quick report:


So to get the actual row data, my approach was to enter the Query Tool, write a 'select * from ...' type of query, and Quick Report the results.

Having selected XML in Quick Report you will obtain the actual row data from the table.

Does this row data look the same as mysql --xml style output?
Not exactly. What happens in pgAdmin xml output is that columns are not treated as individual nodes but instead are attributes of a row node.

Sample first row from the full xml output:
<row id="r1" number="1" c1="X2 II " c2="550 Black Edition " c3="3.1"
c4="2x512k " c5="6MB " c6="2" c7="1.15-1.425 " c8="AM3 " c9="80"
c10="45nm Callisto Q3-2009 " c11="38.75" />

Is this a valid way of representing xml? Certainly.
Will it lead to parsing issues? No. Any parser worth its salt handles nodes and attributes and exposes them in a useful way to the programmer.

Perhaps you might later use python-libxml2 or similar to construct an insert statement of the form INSERT INTO amd_bang_per_watt VALUES(_,_, from the column entries c1,c2,etc.
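Such an insert-builder can be sketched with Python's standard-library xml.etree.ElementTree, used here in place of python-libxml2 (the xml extract and table name follow the samples in this post; the helper name is my own invention):

```python
import xml.etree.ElementTree as ET

# Extract modelled on the pgAdmin Quick Report sample above (attribute-style rows).
xml_doc = """<rows>
<row id="r1" number="1" c1="X2 II" c2="550 Black Edition" c3="3.1" />
</rows>"""

def rows_to_inserts(xml_text, table):
    """Turn pgAdmin-style <row c1=".." c2=".."/> nodes into INSERT statements."""
    inserts = []
    for row in ET.fromstring(xml_text).iter('row'):
        # Pick out c1, c2, ... in numeric order, skipping the id/number attributes.
        cols = sorted((k for k in row.attrib if k.startswith('c')),
                      key=lambda k: int(k[1:]))
        values = ", ".join("'%s'" % row.attrib[c].replace("'", "''") for c in cols)
        inserts.append("INSERT INTO %s VALUES (%s);" % (table, values))
    return inserts

print(rows_to_inserts(xml_doc, 'amd_bang_per_watt')[0])
```

Note the doubling-up of single quotes, which keeps any quote characters in the data from breaking the generated sql.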

This 'more compact' format should save you space and may be doing you a favour if you harbour any thoughts of parsing the rows in jQuery or PHP.

One thing I should point out is that this compact form comes at a price. Should you later insert or remove columns from the table, then the column numbers c1, c2, c3, etc are going to be out of step with your altered table.
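One way to soften that price (a sketch of my own, not pgAdmin behaviour) is to keep a hand-maintained list of the real column names in table order and generate inserts that name their columns explicitly - the column names below are invented purely for illustration:

```python
# Hand-maintained column list in table order; hypothetical names, not read from the XML.
COLUMNS = ['family', 'model', 'clock_ghz']

# Attributes of one pgAdmin-style row node, as a plain dict.
row_attrs = {'id': 'r1', 'number': '1',
             'c1': 'X2 II', 'c2': '550 Black Edition', 'c3': '3.1'}

def insert_with_columns(table, columns, attrs):
    """Build an INSERT that names its columns, so a later ALTER TABLE that
    appends columns does not silently shift the values out of step."""
    values = ", ".join("'%s'" % attrs['c%d' % (i + 1)].replace("'", "''")
                       for i in range(len(columns)))
    return "INSERT INTO %s (%s) VALUES (%s);" % (table, ", ".join(columns), values)

print(insert_with_columns('amd_bang_per_watt', COLUMNS, row_attrs))
```

The insert will still break if a named column is dropped, but it will fail loudly rather than load values into the wrong columns.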

The facilities for xml output in your database can be useful, but are not always the quickest route to your goal. If your data is going out of one postgres database and into another, then you would likely want to use the backup option in pgAdmin.

Examples of such backup output are shown below:
I found the second backup format a refreshing alternative to some of the issues that can otherwise creep in to comma separated data.

xml in xml out (part 2 of 2) is the next post and will continue this theme.

Thursday, July 23, 2009

ctas for mysql and postgres - create table as select ...

'Create table as select...' is a useful way of defining a new table based on some stuff already in your database.

It might be that you have just started work on an existing system and your fresh eyes can see a table that can be normalised, or perhaps you are told that some of the tables are bloated with obsolete columns.

You might use 'create table as select' (CTAS) in the same way you might rework a file on your filesystem by going through copy, edit, move (where the final move replaces the original).
[ So what you did on the way was create a temporary copy and lastly overwrite the original ]

Completing the analogy - in database terms CTAS might create your temporary table and you might look at how to change the schema, so as to ignore the original copy and use your CTAS created table in its place.

Temporary tables for whatever purpose can be created using CTAS or the lesser preferred 'select into'. From postgres documentation: CREATE TABLE AS is the recommended syntax.
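To make the copy-edit-swap analogy concrete, here is a minimal runnable sketch using Python's built-in sqlite3 module, which also accepts CREATE TABLE ... AS SELECT (the table and column names are invented for illustration - substitute your own):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE widgets (name TEXT, price REAL, legacy_blob TEXT)")
con.executemany("INSERT INTO widgets VALUES (?, ?, ?)",
                [('bolt', 0.10, 'x'), ('nut', 0.05, 'y')])

# 'copy + edit': CTAS builds the slimmed-down table, dropping the obsolete column.
con.execute("CREATE TABLE widgets_new AS SELECT name, price FROM widgets")

# 'move': swap the new table into the place of the original.
con.execute("DROP TABLE widgets")
con.execute("ALTER TABLE widgets_new RENAME TO widgets")

print(con.execute("SELECT name, price FROM widgets ORDER BY name").fetchall())
```

On a production database you would of course check the new table thoroughly before the drop-and-rename step, and fix up any views or foreign keys that referenced the original.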




The relevant portion of the mysql syntax is
select_statement:[IGNORE | REPLACE] [AS] SELECT ...   (Some legal select statement)
which is not so easy to find amongst the complex syntactical explanation on the main documentation page.

For perl types who have csv data on the filesystem and want to use CTAS look at DBD::CSV module.

And now for some simple examples using mysql and the sample data from the recent postings:

My suggestion is to just do a standalone select first:

select model,speed_power_ratio,process_comments from amd_bang_per_watt where model not like '%¹%' and process_comments not like '%¹%' order by speed_power_ratio;

which gives this output (abbreviated)

+---------------------+-------------------+---------------------------------------------+
| model               | speed_power_ratio | process_comments                            |
+---------------------+-------------------+---------------------------------------------+
| 550 Black Edition   |             38.75 | 45nm Callisto Q3-2009                       |
...
| 9950                |             18.57 | 65nm B3                                     |
+---------------------+-------------------+---------------------------------------------+
34 rows in set (0.00 sec)

By doing such a check that you get what you expected, you can avoid messing around dropping and recreating your target table.
My target table will be named amd_bang_per_watt_summary and here is the create table as select (CTAS):

mysql> create table amd_bang_per_watt_summary as select model,speed_power_ratio,process_comments from \
-> amd_bang_per_watt where model not like '%¹%' and process_comments not like '%¹%' order by \
-> speed_power_ratio desc;
Query OK, 34 rows affected (0.01 sec)
Records: 34 Duplicates: 0 Warnings: 0

In order to form the sql statement above I needed to do a bit of UTF-8 lookup so that the 'superscript one' could be included correctly in the like bit of the statement.

To paste 'superscript one' onto the command line, I used the charmap application and navigated to 'Latin-1 supplement', where 'superscript one' (0xB9) appears amongst the superscript numerals.
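If you are scripting rather than pasting, the same character can be produced directly from its code point without charmap - a small Python sketch:

```python
# U+00B9 SUPERSCRIPT ONE, written from its code point rather than pasted in.
sup1 = '\u00b9'
pattern = '%' + sup1 + '%'      # the LIKE pattern used in the CTAS above
print(hex(ord(sup1)), pattern)  # prints: 0xb9 %¹%
```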

mysql> select count(*),substr(process_comments,1,4) as nm from amd_bang_per_watt_summary group by nm;
+----------+------+
| count(*) | nm   |
+----------+------+
|       15 | 45nm |
|       19 | 65nm |
+----------+------+
2 rows in set (0.00 sec)

mysql> select * from amd_bang_per_watt_summary where speed_power_ratio > 27;
+--------------------+-------------------+---------------------------------------------+
| model              | speed_power_ratio | process_comments                            |
+--------------------+-------------------+---------------------------------------------+
| 550 Black Edition  |             38.75 | 45nm Callisto Q3-2009                       |
| 705e               |             38.46 | 45nm Heka Q3-2009                           |
| 905e               |             38.46 | 45nm Deneb Q3-2009                          |
| 545                |             37.50 | 45nm Callisto Q3-2009                       |
| 900e               |             36.92 | 45nm Deneb 2009                             |
| 945                |             31.58 | 45nm Deneb (part no HDX945WFK4DGI) Q3-2009  |
| 720 Black Edition  |             29.47 | 45nm Heka Q1-2009                           |
| 9100e              |             27.69 | 65nm B2                                     |
| 9150e              |             27.69 | 65nm B3                                     |
| 910                |             27.37 | 45nm Deneb Q1-2009                          |
| 710                |             27.37 | 45nm Heka Q1-2009                           |
| 810                |             27.37 | 45nm Deneb Q1-2009                          |
+--------------------+-------------------+---------------------------------------------+
12 rows in set (0.00 sec)

The warning count reported by the CTAS is the only thing I might worry about, but it being zero is good, and the quick queries above demonstrated to me that all was well.

The mysql output above is fairly readable but a truer representation of what appeared on my display is in this publicly accessible googledoc.

The superscript one in the like '%¹%' bit of the sql might need a bit of explaining - the original wikipedia data included some processors for which there was no official release date. My target table amd_bang_per_watt_summary will not contain those entries.

Hopefully this posting will save you from having to trawl the documentation for CTAS, sparing you the experience of:
  • mysql - the page is so long and complicated that the [AS] bit is easy to miss
  • postgres - whilst CTAS is given a page of its own, there is no link from the regular 'create table' page, so I personally found it difficult to locate
  • both - the acronym CTAS, whilst unofficial, might be included as a tag purely for user convenience