in reply to How to interpolate sql-output

See all the problems dynamic SQL causes? :) To avoid these issues and make things more secure, use a prepared query with placeholders, and supply the values at execution time. Further, if the query will be executed multiple times, a prepared query can run a bit faster, since the optimizer is usually not invoked after the first run.
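For illustration, here is a minimal sketch of a placeholder query using Python's sqlite3 module; the foo_data table and its columns are assumptions modeled on the query discussed in this thread.

```python
import sqlite3

# Assumed schema, based on the query in this thread.
conn = sqlite3.connect(":memory:")
conn.execute("create table foo_data "
             "(id integer primary key, foo_id integer, key text, value text)")
conn.executemany(
    "insert into foo_data (foo_id, key, value) values (?, ?, ?)",
    [(1, "name", "alice"), (2, "name", "bob")])

# The value is supplied at execution time, never interpolated into the SQL.
excluded = 2
rows = conn.execute(
    "select foo_id, value from foo_data where key = 'name' and foo_id <> ?",
    (excluded,),
).fetchall()
print(rows)  # [(1, 'alice')]
```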

Anyway, instead of storing queries in a table, why not just make them views (or stored procedures)? That's what they're for, and it makes running them a lot easier. If worst comes to worst, you can store the name of the view (or stored procedure) in a table for easy lookup. Or, perhaps, the name of the script's internal procedure that executes it.
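A sketch of the view approach, again using sqlite3 as a stand-in database; the view name latest_names and the schema are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table foo_data "
             "(id integer primary key, foo_id integer, key text, value text)")
conn.execute("insert into foo_data (foo_id, key, value) values (1, 'name', 'alice')")

# Wrap the saved query in a view; callers then just select from it by name,
# instead of pulling SQL text out of a table and executing it dynamically.
conn.execute("""
    create view latest_names as
    select foo_id, value
      from foo_data
     where key = 'name'
""")
rows = conn.execute("select * from latest_names").fetchall()
print(rows)  # [(1, 'alice')]
```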

Side comments on the query itself: the query uses an outer query just to get ids and pass them to an inner query that also has id. Why use two tables? And the order by id desc limit 1 used to fetch name can be done with an analytic function. How does this look?:

select distinct foo_id key,
       last_value(value) over ( partition by foo_id order by id
                                rows between unbounded preceding
                                         and unbounded following ) value
  from foo_data
 where foo_id <> ? and aktivity is true and key = 'name';

Update: Query missed the order by id when fetching name. Here it is again with an analytical function.
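The analytic-function query can be tried end to end; this sketch uses sqlite3 as a stand-in (window functions require SQLite 3.25+) and substitutes aktivity = 1 for the boolean test. Note that last_value needs an explicit frame covering the whole partition, since the default frame stops at the current row.

```python
import sqlite3  # bundled SQLite must be >= 3.25 for window functions

conn = sqlite3.connect(":memory:")
conn.execute("""create table foo_data (
    id integer primary key, foo_id integer,
    key text, value text, aktivity integer)""")
conn.executemany(
    "insert into foo_data (foo_id, key, value, aktivity) values (?, ?, ?, ?)",
    [(1, "name", "old-alice", 1),
     (1, "name", "alice", 1),   # later insert wins: larger id
     (2, "name", "bob", 1)])

# last_value over the whole partition yields the newest value per foo_id;
# distinct then collapses the duplicated per-row results.
rows = conn.execute("""
    select distinct foo_id,
           last_value(value) over ( partition by foo_id order by id
                                    rows between unbounded preceding
                                             and unbounded following ) as value
      from foo_data
     where foo_id <> ? and aktivity = 1 and key = 'name'
""", (3,)).fetchall()
print(sorted(rows))  # [(1, 'alice'), (2, 'bob')]
```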

Re^2: How to interpolate sql-output
by Seq (Novice) on Jan 14, 2015 at 22:01 UTC

    I'm quite aware how dangerous it is to let people make their own queries.
    This is only for admins of that software.

    Why two tables? One table contains only the id of the item and its activity.
    The other table contains all fields as key-value pairs, so one can add as many fields as one wants without altering the db schema.
    That order and limit fetches only the latest value, because there are no updates or deletes, just inserts, so it builds a history of changes.

      The other table is simply not required. It contains no information that the inner table does not have. The only benefit it might have is that id is already unique, but at the cost of loading another table and doing a join, it probably hurts more than it helps.

      Queries get more and more complicated over time as they are used for more and more things. This adds complexity which makes understanding the queries take longer and hampers their revision. The simpler the query, the easier on the programmer and the database's optimizer.

        Let's say I have a table ppl, which includes a running id, an activity boolean, and a timestamp defaulting to now.
        Then I have another table (=ppl_data) which also has an id as serial (only for easier manipulation), plus ppl_id, key, value, and usually a timestamp.

        I can put anything into that ppl_data table without altering the db schema, because it stores key-value pairs.

        Because I use only inserts and always fetch only the newest value for a key, there is a nice history of changes.

        The reason for this is that the customer is able to add different fields if needed.
        Of course there is also a datatype for each field, so sanity checks and casting are easier.

        If needed, one can vacuum old data from ppl_data if it starts to affect the db server, but usually there is enough power.
        As long as the db server isn't a virtual server, where disk latency might eventually cause slowdowns.

        And since there is also a way to run different queries, one can alter or even add features without code changes.
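The insert-only history pattern described above can be sketched like this, again with sqlite3 as a stand-in; the table and column names follow the post, with the timestamp columns omitted for brevity.

```python
import sqlite3

# Assumed schema based on the ppl / ppl_data description above.
conn = sqlite3.connect(":memory:")
conn.execute("create table ppl (id integer primary key, activity integer default 1)")
conn.execute("""create table ppl_data (
    id integer primary key, ppl_id integer, key text, value text)""")

conn.execute("insert into ppl default values")
# Every change is a new insert; old rows stay behind as history.
conn.executemany(
    "insert into ppl_data (ppl_id, key, value) values (1, 'name', ?)",
    [("old name",), ("new name",)])

# The newest value for a key is simply the row with the highest id.
row = conn.execute("""
    select value from ppl_data
     where ppl_id = 1 and key = 'name'
     order by id desc limit 1
""").fetchone()
print(row[0])  # new name
```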