Monday, October 22, 2007

BI Publisher: Bursting - How to resolve pruning issues

Andrew Papastefanou from South Africa wrote:

Hi,

When bursting according to a specific location in my data, I find that I
cannot access fields outside the scope of the 'select' in the control file.

I have an XML file with the following structure (underscores show the nesting); my select request in my control file tries to burst according to branches:

DATA
_CONTROL_AREAS
__CONTROL_AREA
___CNAME
___BRANCHES
____BRANCH
_____DIVISIONS
______DIVISION
_______AGENTS
________AGENT


My report template, however, needs to be able to display "CNAME", which is a node in the /DATA/CONTROL_AREAS/CONTROL_AREA section of the data. Opening the generated report shows a blank space where the field 'CNAME' should be.

Is this information available when I use a 'select'
to somewhere inside the tree?
Or do I need to create a new template each
time to display the outer data?

I tried many XPATH combinations but was
unable to get that to work either.

Thanks in advance!

The two selects Andy was doing were as follows, in this order:
request select="/DATA/CONTROL_AREAS/CONTROL_AREA"
request select="/DATA/CONTROL_AREAS/CONTROL_AREA/BRANCHES/BRANCH"

The reason CNAME was not showing up in the XML was that it had been pruned (removed) by the first select. There were a couple of responses on how to solve the issue. CJ (Chris Brown in the BI Publisher Forum) suggested adding CONTROL_AREA deeper in the XML, at the same level as BRANCH. Not a bad idea, but there is an easier way to solve this issue.

The alternative solution I came up with was simply to switch the order, like so:

request select="/DATA/CONTROL_AREAS/CONTROL_AREA/BRANCHES/BRANCH"
request select="/DATA/CONTROL_AREAS/CONTROL_AREA"

Now the only thing that will have been pruned is everything under BRANCH, so CNAME is still available. There is another solution for this issue as well.

Create two bursting control files and call the DocumentProcessor API from a Java concurrent program. However, this solution could be considered excessive compared to the previously mentioned ones.
Sample call:
DocumentProcessor dp = new DocumentProcessor(getBurstDirectory() + getBurstFileName(),
                                             getOutputDirectory() + getOutputFile() + ".xml",
                                             getOutputDirectory());

Now when you run into this issue, you can keep your cool and remember that prunes are for old people.

Ike

Thursday, October 18, 2007

BI Publisher: Merge PDFs - Why would I ever want to do that?

When you first read that BI Publisher has the ability to merge PDF documents, it's neat, but you think, "I’ll never use that." Well, not necessarily. Remember, merging files together is new functionality in the EBS suite; nobody knows that developers have this kind of power at their fingertips now. But it still comes back to: why would you ever want to do this? There are really two main reasons to merge PDF documents:

1. Automation
2. Disparate Content (documents in different systems)

Let’s just say you’re dealing with PDF attachments in the EBS suite. The business has requested that when there is an attachment, it become part of some other standard document. Well, you can’t do that in Reports 6i with PDFs, but in BI Publisher this is a no-brainer: you grab the blob, write it out to a file, and then call the PDFDocMerger utility.
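Here is a minimal sketch of that blob-to-file step. The directory name and attachment lookup are assumptions (fnd_lobs is the standard EBS attachment store, but your attachment may live elsewhere); the merge itself is then done with PDFDocMerger as described in the documentation.

declare
  l_blob   blob;
  l_file   utl_file.file_type;
  l_buffer raw(32767);
  l_amount binary_integer := 32767;
  l_pos    integer := 1;
  l_length integer;
begin
  -- hypothetical lookup: pull the PDF attachment out of fnd_lobs
  select file_data
    into l_blob
    from fnd_lobs
   where file_id = 12345;   -- the attachment file_id you resolved earlier

  l_length := dbms_lob.getlength(l_blob);
  -- XX_PDF_DIR is an assumed database directory the concurrent tier can reach
  l_file := utl_file.fopen('XX_PDF_DIR', 'attachment.pdf', 'wb', 32767);

  -- stream the blob to disk a chunk at a time
  while l_pos <= l_length loop
    dbms_lob.read(l_blob, l_amount, l_pos, l_buffer);
    utl_file.put_raw(l_file, l_buffer, true);
    l_pos := l_pos + l_amount;
  end loop;

  utl_file.fclose(l_file);
  -- the written file plus your standard document can now be handed to PDFDocMerger
end;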

Here’s a more long-winded scenario. Your company works with hazardous materials. Some chemicals are so volatile that if there ever were a spill, having an MSDS (Material Safety Data Sheet) with the shipper/driver might be extremely helpful to the authorities in resolving the situation quickly and safely. However, your shipping personnel on the floor are absolutely swamped, and there is potential to miss that step. Well, the document merger might be a nice tool to solve that issue. Now the shipping personnel don’t have to worry about finding an MSDS on the system to give to the driver. This happens automatically, based on some business logic coded in a concurrent program that generates a BOL, then finds the MSDS and merges the two documents before printing (Automation and Disparate Content). The shipping personnel go right to the printer and pick up one document!

I won’t paste the PDFDocMerger call itself; the BI Publisher documentation for that API is really good and implementing it is very simple.

Good Luck!

Ike

Monday, October 15, 2007

BI Publisher: What the Lexical! Understanding why, when and how to use lexicals and bind variables correctly

Lexicals can be confusing for new developers. Lexicals and bind variables are used in the following technology sets:

1. BI Publisher
2. Reports6i
3. SQL Scripts
4. PL/SQL

A bind variable is a placeholder whose value is supplied at execution time; a lexical is replaced with its literal text before the statement is parsed. The syntax for a bind variable is :P_SEGMENT, and the syntax for a lexical is &L_WHERE_CLAUSE. Here’s how you would use the two of these in a query:

1. Select * from mtl_system_items where segment1 = :p_segment
2. Select * from mtl_system_items where &l_where_clause

Here are some neat tricks with lexicals and how they can be combined with bind variables. Below we can see that a lexical can contain a bind variable. This may seem confusing, but it has major implications for the performance of your query.

Good:
l_where_clause := 'segment1 = :p_segment';

Bad:
l_where_clause := 'segment1 = ''' || p_segment || '''';

Concatenating the value of p_segment into the string is a bad idea because it causes a performance issue: every distinct value produces a different statement, so each one has to be hard parsed with its own execution plan and nothing gets cached. The lexical that contains a bind variable is much better, because the statement text stays the same and the parsed statement gets cached and reused.

The performance issue doesn’t really rear its ugly head until the query gets into a production environment or is executed in some sort of batch process.
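To make the difference concrete, here is a minimal PL/SQL sketch of both variants. The table is mtl_system_items as in the queries above, the item value is made up, and execute immediate just stands in for however your report actually assembles and runs the statement:

declare
  l_where_clause varchar2(200);
  l_count        pls_integer;
begin
  -- Good: the lexical carries a bind placeholder, so the statement text is
  -- identical for every value and the parsed cursor can be shared
  l_where_clause := 'segment1 = :p_segment';
  execute immediate
    'select count(*) from mtl_system_items where ' || l_where_clause
    into l_count
    using 'AS54888';   -- hypothetical item number supplied at execution time

  -- Bad: gluing the literal value into the lexical means every distinct
  -- value is a brand-new statement that has to be hard parsed
  l_where_clause := 'segment1 = ''' || 'AS54888' || '''';
  execute immediate
    'select count(*) from mtl_system_items where ' || l_where_clause
    into l_count;
end;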


That’s it; that’s all there is to lexicals and bind variables.

Good Luck!

Wednesday, October 10, 2007

BI Publisher: Taking it to the next level part II (PL/SQL Tables)

You may find this useful if you read my previous article (PL/SQL Taking it to the next level). This is an excerpt of what I posted as a solution for a user in the BI Publisher forum who was going to create staging tables for a BI Publisher report. There is no need to create staging tables anymore; you’re just creating more work for yourself and possible performance issues.

Hi All,

I am creating a report which first calls a procedure to populate the custom table, and then it uses the custom table to create the report. My question to you all is: is there a way to call a procedure first and then call the table from BIP?

Thanks,
Madhavan G V

Here's my response:

Whoa, whoa, whoa.

You should use a pipelined function. Then you can cast a PL/SQL table as an actual table. It will save you a read and a write to disk, plus the table objects, and there is no need to worry about commits, rollbacks and cleanup. This is truly a godsend for reporting; however, because of the obscurity of pipelined functions nobody ever uses them, which is such a shame, as they are one of the most powerful tools in PL/SQL (it's not just for ETL). Below is some sample code. What you can see is the ability to join on the table. Basically you could do counts, max, min, etc. all in one shot versus creating a function, plus it can be reused...
good luck,

Ike


Here's an example:

==================================
==Sample Query
==================================
SELECT
    GMD_SPEC_TESTS.SPEC_ID
  , GMD_SPEC_TESTS.TEST_ID
  , GMD_SPEC_TESTS.TEST_METHOD_ID
FROM
    GMD_QC_TESTS
  , GMD_SPEC_TESTS
  , table(vca_rm_spec_pkg.get_valid_tests_table(:p_report_type, GMD_QC_TESTS.TEST_CODE, GMD_QC_TESTS.TEST_DESC)) test_tab
WHERE
      GMD_QC_TESTS.TEST_ID = GMD_SPEC_TESTS.TEST_ID
  AND NVL(GMD_QC_TESTS.DELETE_MARK, 0) != 1
  AND GMD_SPEC_TESTS.SPEC_ID = :HEADER_SPEC_ID
  AND test_tab.test_code = GMD_QC_TESTS.TEST_CODE
  AND GMD_QC_TESTS.TEST_CODE <> 'QAPRV'
ORDER BY GMD_QC_TESTS.TEST_CODE


==================================
==package body
==================================
function get_valid_tests_table (p_report_type in varchar2, p_test_code in varchar2, p_test_desc in varchar2)
  RETURN test_tbl_undef PIPELINED
is
  l_test_rec vca_rm_spec_pkg.test_rec;
begin
  G_PROC_NAME := 'get_valid_tests_table';
  if p_report_type = 'SEARCH' then
    -- pipe out whatever set_valid_tests_table cached in the session
    if g_test_tbl.count <> 0 then
      for i in g_test_tbl.first .. g_test_tbl.last loop
        l_test_rec := g_test_tbl(i);
        pipe row(l_test_rec);
      end loop;
    end if;
  else
    -- this is the summary: what comes in goes out
    l_test_rec.test_code := p_test_code;
    l_test_rec.test_desc := p_test_desc;
    pipe row(l_test_rec);
  end if;
  return;
end get_valid_tests_table;

procedure set_valid_tests_table (p_test_code in varchar2, p_test_desc in varchar2)
is
begin
  G_PROC_NAME := 'set_valid_tests_table';
  if g_test_tbl.count = 0 then
    g_test_tbl(1).test_code := p_test_code;
    g_test_tbl(1).test_desc := p_test_desc;
  else
    -- extend the index-by table: assigning to count+1 creates the new row,
    -- after which count already points at it
    g_test_tbl(g_test_tbl.count + 1).test_code := p_test_code;
    g_test_tbl(g_test_tbl.count).test_desc := p_test_desc;
  end if;
  return;
end set_valid_tests_table;


==============================
==package spec
==============================
type test_rec is record (
  test_code varchar2(30),
  test_desc varchar2(150)
);
type test_tbl_undef is table of test_rec;
type test_tbl is table of test_rec INDEX BY BINARY_INTEGER;
g_test_tbl test_tbl;

--returns a pl/sql table pipelined, query casts as table
function get_valid_tests_table (p_report_type in varchar2, p_test_code in varchar2, p_test_desc in varchar2)
RETURN test_tbl_undef PIPELINED;

--populates the session-level pl/sql table that get_valid_tests_table pipes out
procedure set_valid_tests_table (p_test_code in varchar2, p_test_desc in varchar2);
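
For completeness, here is roughly how the two fit together. The test codes below are made up; the set calls would typically run before the main query (for example from a before-report trigger), and the report query then casts the cached rows exactly as in the sample query above.

begin
  -- hypothetical values: cache the tests of interest for this session
  vca_rm_spec_pkg.set_valid_tests_table('PH',   'pH value');
  vca_rm_spec_pkg.set_valid_tests_table('VISC', 'Viscosity');
end;
/

-- the report query can then join straight onto the cached rows
select test_tab.test_code, test_tab.test_desc
from table(vca_rm_spec_pkg.get_valid_tests_table('SEARCH', null, null)) test_tab;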