Friday, November 16, 2007
I have XML documents arriving in a Linux directory. I would like to take the raw XML data, apply a template, and send the result to a printer.
1. Since there is no way to tell when the XML documents will appear for processing, is there any way I can make BI Publisher poll the Linux directory to check for incoming files?
2. How can I apply a template and make the raw XML data beautiful?
3. Finally, after template application, I need to take the output and send it to a printer. I believe I will have to use the Delivery Manager APIs to do it.
Can someone help with the three questions?
I don't know if the XML Gateway will solve your problem as "punit" suggested. Don't worry though, this problem can get solved pretty easily. Chin up, cowboy.
I would set up an autosys job that calls a shell script that loops through the files in a directory and executes a command-line call to the bursting APIs.
It's going to expect the following arguments:
java oracle.apps.xdo.batch.DocumentProcessor [-debug] -tempdir -datafile
You may need to resolve the whole classpath (echo $JAVA_TOP). That's it; there's really not much to the API.
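For question 1, a wrapper like the following is enough for the autosys job to call. This is a hedged sketch: the directory layout and the DocumentProcessor flags are assumptions lifted from the command quoted above, so verify them against your own environment. With DRY_RUN=1 set, it just prints the commands it would run:

```shell
#!/bin/sh
# Hypothetical polling wrapper for the bursting engine.
# Assumes $CLASSPATH already includes the xdo jars (see $JAVA_TOP).
poll_and_burst() {
  incoming_dir=$1
  temp_dir=${2:-/tmp}
  run=${DRY_RUN:+echo}   # DRY_RUN=1 prints the commands instead of executing
  for f in "$incoming_dir"/*.xml; do
    [ -e "$f" ] || continue            # empty directory: glob stays literal
    $run java oracle.apps.xdo.batch.DocumentProcessor \
        -tempdir "$temp_dir" -datafile "$f"
  done
}

# Example: poll_and_burst /data/xml/incoming /tmp
```

On a real box you would have the scheduled job call `poll_and_burst` and then move or archive each file after a successful run, so it is not picked up twice.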
Monday, November 12, 2007
Like other disclaimers, nothing here is guaranteed to work; it's an overview or tutorial. If you feel like something is whacked out, send me a shout. I will fix it and give you mad props.
Please excuse the iFrame for the tutorial. See below.
Tuesday, November 6, 2007
Context: Context describes where you currently are in a set of relationships. As an example: right now you're sitting down in a chair, in some room or cube, which is in a building, which is in some city, in some state, in some country... well, you get the picture.
Hierarchical Relationships: Relationships that exist between one or more objects. An example would be a parent and child: the child is a product of the parent.
Root: The start or beginning of something. An example: let's get to the root of the problem.
Node/Element: Basically an item of some sort. Your keys are elements of your key chain.
Let’s Get Started
The for-each tag is used to loop through an XML document. An XML document always has a root node, or starting node. All XML documents have hierarchical relationships. In the example below we can see the following: an employee is a child of a dept, and a dept is a child of a corporation. Every employee, dept and corporation has elements, and these elements are typically unique to it.
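The iFramed example did not survive here, so here is a minimal sketch of the kind of document the tutorial describes (all names are made up):

```xml
<?xml version="1.0"?>
<!-- CORPORATION is the root node; DEPT is a child of CORPORATION,
     and EMPLOYEE is a child of DEPT -->
<CORPORATION>
  <CORP_NAME>Acme</CORP_NAME>
  <DEPT>
    <DEPT_NAME>Shipping</DEPT_NAME>
    <EMPLOYEE>
      <EMP_ID>100</EMP_ID>
      <EMP_NAME>Jane Doe</EMP_NAME>
    </EMPLOYEE>
    <EMPLOYEE>
      <EMP_ID>101</EMP_ID>
      <EMP_NAME>John Smith</EMP_NAME>
    </EMPLOYEE>
  </DEPT>
</CORPORATION>
```

In an RTF template, looping over every employee would then look something like: <?for-each:EMPLOYEE?> <?EMP_NAME?> <?end for-each?>.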
Monday, October 22, 2007
When bursting according to a specific location in my data, I find that I cannot access fields outside the scope of the 'select' in the control file.
I have the following xml file:
My select request in my control file tries to burst according to branches as follows:
My report template, however, needs to be able to display "CNAME", which is a node in the /DATA/CONTROL_AREAS/CONTROL_AREA section of data. Opening the generated report shows a blank space where the field 'CNAME' should be.
Is this information available when I use a 'select' to somewhere inside the tree? Or do I need to create a new template each time to display the outer data? I tried many XPATH combinations but was unable to get that to work either.
Thanks in advance!
The two selects Andy was doing were as follows and in this order:
The reason CNAME was not showing up in the XML was because it was pruned (removed) by the first select. There were a couple of responses on how to solve the issue. CJ (Chris Brown in the BI Publisher Forum) suggested adding CONTROL_AREA deeper in the XML, at the same level as BRANCH. Not a bad idea, but there is an easier way to solve this issue.
The alternative solution I came up with was to just switch the order like so:
Now the only thing that will have gotten pruned will be everything from BRANCH down. There is another solution for this issue as well:
Create two burst files and call the DocumentProcessor API in a Java concurrent program. However, this solution could be considered excessive compared to the previously mentioned ones.
Sample Call: DocumentProcessor dp = new DocumentProcessor(getBurstDirectory() + getBurstFileName(), getOutputDirectory() + getOutputFile() + ".xml", getOutputDirectory());
Now when you run into this issue, you can keep your cool and remember that prunes are for old people.
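The two selects discussed above were shown as screenshots that did not survive here. As a rough, hypothetical sketch (the xapi wrapper is standard bursting-control-file syntax, but the BRANCH path and the elided delivery/document sections are guesses), the reordered control file would look something like this:

```xml
<xapi:requestset xmlns:xapi="http://xmlns.oracle.com/oxp/xapi">
  <!-- Outer CONTROL_AREA select runs first, while CNAME is still present -->
  <xapi:request select="/DATA/CONTROL_AREAS/CONTROL_AREA">
    ...
  </xapi:request>
  <!-- The deeper BRANCH select now prunes only from BRANCH down -->
  <xapi:request select="/DATA/CONTROL_AREAS/CONTROL_AREA/BRANCHES/BRANCH">
    ...
  </xapi:request>
</xapi:requestset>
```

The order of the xapi:request elements is the whole trick: whatever the first select prunes is gone for every select after it.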
Thursday, October 18, 2007
1. Automation
Let’s just say you’re dealing with PDF attachments in the EBS Suite. The business has requested that when there is an attachment, they want it to be part of some other standard document. Well, you can’t do that in reports6i with PDFs, but in BI Publisher this is a no-brainer. You grab the blob, write it out to a file and then call the PDFDocMerger utility.
2. Disparate Content (documents in different systems)
Here’s a more long-winded scenario. Your company works with hazardous materials. Some chemicals are so volatile that if there ever were a spill, having an MSDS (Material Safety Data Sheet) with the shipper/driver might be extremely helpful to authorities in resolving the situation quickly and safely. However, your shipping personnel on the floor are just absolutely swamped, and there is potential to miss that step. Well, the document merger might be a nice tool to solve that issue. Now the shipping personnel don’t have to worry about finding an MSDS on the system to give to the driver. This happens automatically, based on some business logic coded in a concurrent program that generates a BOL, then finds an MSDS and merges the two documents before printing (Automation and Disparate Content). The shipping personnel go right to the printer and pick up one document!
Normally, I would paste some sort of code snippet here. But the BI Publisher documentation for the PDFDocMerger API is really good and implementing it is very simple.
Monday, October 15, 2007
BI Publisher: What the Lexical! Understanding why, when and how to use lexicals and bind variables correctly
1. BI Publisher
3. SQL Scripts
A bind variable is used for the assignment of a value; a lexical is the literal value. The syntax for a bind variable is :P_SEGMENT, and the syntax for a lexical is &L_WHERE_CLAUSE. Here’s how you would use the two of these in a query:
1. Select * from mtl_system_items where segment1 = :p_segment
2. Select * from mtl_system_items where &l_where_clause
Here are some neat tricks with lexicals and how they can be combined with bind variables. Below we can see that a lexical can contain a bind variable. This may seem confusing but it has major implications for the performance of your query.
l_where_clause := 'segment1 = :p_segment';
l_where_clause := 'segment1 = ''' || p_segment || '''';
Appending segment1's value as a literal is a bad idea because it causes a performance issue. Every time this query is parsed, it needs to generate a new explain plan, so nothing gets cached. Whereas the lexical with a bind variable is much better, because the statement gets cached.
The performance issue doesn’t really rear its ugly head until it gets in a production environment or this query is being executed in some sort of batch process.
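To make that concrete, here is a small sketch (the ref-cursor plumbing and the sample value are mine; the table and variable names come from the examples above) showing how each flavor would actually be opened:

```sql
declare
  l_where_clause varchar2(200);
  l_cursor       sys_refcursor;
  p_segment      varchar2(40) := 'AS54888';  -- sample value, made up
begin
  -- Good: the lexical carries a bind placeholder, so the statement text
  -- stays identical across runs and the parsed cursor/plan can be reused.
  l_where_clause := 'segment1 = :p_segment';
  open l_cursor for
    'select * from mtl_system_items where ' || l_where_clause
    using p_segment;
  close l_cursor;

  -- Bad: the value is glued in as a literal, so every distinct value
  -- produces a new statement text and a fresh hard parse.
  l_where_clause := 'segment1 = ''' || p_segment || '''';
  open l_cursor for
    'select * from mtl_system_items where ' || l_where_clause;
  close l_cursor;
end;
```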
That’s it, that’s all there is to lexicals and bind variables.
Wednesday, October 10, 2007
Hi All,
I am creating a report which first calls a procedure to populate a custom table, and then uses the custom table to create the report. My question to you all is: is there a way to call a procedure first and then call the table from BIP?
Thanks,
Madhavan G V
Here's my response:
You should use a pipelined function. Then you can cast a pl/sql table as an actual table. It will save you a read and a write to disk, plus the table objects, and there is no need to worry about commits, rollbacks and cleanup. This is truly a godsend for reporting; however, because of the anonymity of pipelined functions, nobody ever uses them, which is such a shame, as they are one of the most powerful tools in pl/sql (it's not just for ETL). Below is some sample code. What you can see is the ability to join on the table: basically you could do counts, max, min, etc. all in one shot versus creating a function for each, plus, it can be reused...
Here's an example:
select test_tab.*
from GMD_QC_TESTS
,GMD_SPEC_TESTS
,table(vca_rm_spec_pkg.get_valid_tests_table(:p_report_type, GMD_QC_TESTS.TEST_CODE, GMD_QC_TESTS.TEST_DESC)) test_tab
where GMD_QC_TESTS.TEST_ID = GMD_SPEC_TESTS.TEST_ID
AND NVL(GMD_QC_TESTS.DELETE_MARK, 0) != 1
and test_tab.test_code = GMD_QC_TESTS.TEST_CODE
and GMD_QC_TESTS.TEST_CODE <> 'QAPRV'
ORDER BY TEST_CODE
function get_valid_tests_table (p_report_type in varchar2, p_test_code in varchar2, p_test_desc in varchar2)
RETURN test_tbl_undef PIPELINED is
  l_test_rec test_rec;
begin
  if p_report_type = 'SEARCH' then
    if g_test_tbl.count <> 0 then
      for i in g_test_tbl.first..g_test_tbl.last loop
        l_test_rec := g_test_tbl(i);
        pipe row (l_test_rec);
      end loop;
    end if;
  else
    --this is the summary: what comes in goes out
    l_test_rec.test_code := p_test_code;
    l_test_rec.test_desc := p_test_desc;
    pipe row (l_test_rec);
  end if;
  return; end;
procedure set_valid_tests_table (p_test_code in varchar2, p_test_desc in varchar2) is
begin
  if g_test_tbl.count is null or g_test_tbl.count = 0 then
    g_test_tbl(1).test_code := p_test_code;
    g_test_tbl(1).test_desc := p_test_desc;
  else
    g_test_tbl(g_test_tbl.count+1).test_code := p_test_code;
    g_test_tbl(g_test_tbl.count).test_desc := p_test_desc;
  end if;
end;
type test_rec is record (test_code varchar2(30), test_desc varchar2(240));
type test_tbl_undef is table of test_rec;
type test_tbl is table of test_rec INDEX BY BINARY_INTEGER;
g_test_tbl test_tbl;
--returns a pl/sql table pipelined, query casts as table
function get_valid_tests_table (p_report_type in varchar2, p_test_code in varchar2, p_test_desc in varchar2)
RETURN test_tbl_undef PIPELINED;
--populates the pl/sql table that the pipelined function reads
procedure set_valid_tests_table (p_test_code in varchar2, p_test_desc in varchar2);
Thursday, September 27, 2007
Yes, there are pitfalls! I have seen some questions in the BI Publisher forum about converting all of a company's existing reports6i code to BI Publisher. Don’t get me wrong, the tool is great, but like any tool it has some issues.
Here are some common elements, as far as time is concerned, for report development in general. These estimates are based on my experience writing reports in the 11i e-business suite.
New report – from dev --> test --> production, development time is typically about 40 hours.
Fixing an existing report – if you’re lucky, 2 minutes, but on average about 5 hours.
Converting/copying an existing report – 10-15 hours.
Converting reports6i to BI Publisher
Small report – 10 hours.
Large report – 20-40 hours; it can get ugly!
You may be asking yourself: what the deuce? How is it possible that converting a large report could take up to 40 hours? There are some bugs in the conversion tool. As an example:
Function get_item_no return char
Where item_id = :item_id;
Conversion to PL/SQL:
Function get_item_no (item_id in number) return char
Where item_id = item_id;
Data Template Issues
In the converted PL/SQL example, you can see the problem in the where clause: during conversion the parameter sometimes ends up with the same name as the joining column. Ouch! get_item_no will actually compile, too (note: it really shouldn’t in "theory," since item_id is "technically" ambiguous). Now we're left with a situation where that function will blow up 100% of the time, no matter where it’s called from. Some reports can have over 50+ functions. When this issue occurs, you have to fix each query and narrow it down in the data template.
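A sketch of the manual fix (the select list and table are hypothetical; the only thing that matters is the parameter rename) looks like this:

```sql
-- Converted (broken): "where item_id = item_id" lets the parameter shadow
-- the column, so the predicate never filters the way you intended.
-- Fixed: rename the parameter so it cannot collide with the column name.
function get_item_no (p_item_id in number) return char is
  l_item_no varchar2(40);
begin
  select segment1           -- hypothetical column
  into   l_item_no
  from   mtl_system_items   -- hypothetical table
  where  item_id = p_item_id;
  return l_item_no;
end get_item_no;
```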
Format Template Issues
Another kind of nasty issue is caused by sloppy development in reports6i with srw.set_field(...). This "was" a way of changing the data in a field in the report's format trigger. It’s a big no-no. Well, guess what: set_field never gets converted to the format template, so we have to fix the issue manually. Something like this can really easily slip through the cracks. Also, did I mention that all the logic in your format triggers to suppress fields or groups, change colors, fonts, etc. doesn’t get converted either? Now that will take some dev time!
The one thing to take away from this article is this: don’t fix things that aren’t broken (easy to say, hard to practice). It’s a lot of work to convert a large report, and troubleshooting the conversion issues will cost you time and money. Also, expect to feel the majority of the pain in the format template conversion. Formatting is the number one challenge for any report!
When the conversion tool becomes a more mature product, we can expect the conversion to be better, and it “might” make sense to convert more of your organization's reports. The only justification for this would be either a lower cost of support or a disappearing reports6i skill set in your organization/development community. With that in mind, if you don’t have a choice and you have to convert a report, the tool does a pretty good job, and it’s better than nothing.
Tuesday, September 25, 2007
Ever wonder how you could query a pl/sql table? Well, there is a way to do this now (since 9i): you must define the function as "pipelined." This is one of PL/SQL's more powerful abilities; ironically, it's rarely used and relatively unknown in the PL/SQL development community. If you talked to your top PL/SQL guru and said, "Hey man, these pipelined functions could be really awesome for reporting," the response from your guru would be this: "How dare you fill my head with such vile and lonesome propaganda!"
Essentially, for report writers this opens up some doors that we haven’t ever thought about, or wish we'd had when writing some really challenging reports. This really comes in handy when you're working with tough data models. With pipelined functions there is no need for staging tables, long concatenated lexicals or multiple functions.
While a function will normally only return one value, the exception to this rule is a pipelined function, which can return n values. The pseudo-syntax for defining a function of that nature is below.
function get_valid_employee_table (p_dept in varchar2) RETURN emp_tbl_type PIPELINED
-- loop through some pl/sql table that you pre-populated in your
-- before-report trigger or the calling program
l_emp_rec := some_pl_sql_table(i);
pipe row (l_emp_rec);

select *
from TABLE(emp_pkg.get_valid_employee_table('finance')) managers
where managers.employee_id in (select employee_id from employees)
The inside skinny: When, Where and Why
When:
1. You want to avoid a huge concatenated lexical variable in your where clause that will cause performance issues.
2. There is a need for a staging table. Typically, with staging tables you're already storing the data in memory, so why make the report transaction-based? Note: staging tables in reports are caused by a bad data model or unruly requirements... it can happen :-)
Where:
1. Code this in your pl/sql package.
2. Use it either for reporting (see when) or an ETL process.
Why:
1. There is no need to create half a dozen functions; you can create a record, call one function, and cast it as a table. Boom, you've saved yourself some redundancy in your query and in PL/SQL.
2. Writes to disk are eliminated; a performance plus.
3. No need to manage transactions in a report (i.e. commits/rollbacks into staging tables). A less complex report...
4. Complex calculations can be performed in SQL natively:
a. ad-hoc counts
b. group by
c. joins/outer joins to sql tables, or even other parallel-enabled functions
5. Technically, this can be viewed as polymorphism. Since it's code that is logically being represented as a table or view (yes, you can put it in a view), it can be queried in n possible ways, with your only limitation being that of SQL and/or PL/SQL. Think about it...
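Pulling the pieces above together, a minimal self-contained sketch (all names are made up; a real version would pipe rows from a pre-populated pl/sql table instead of hard-coding them) would look like this:

```sql
create or replace package emp_pkg as
  type emp_rec is record (employee_id number, emp_name varchar2(80));
  type emp_tbl_type is table of emp_rec;
  function get_valid_employee_table (p_dept in varchar2)
    return emp_tbl_type pipelined;
end emp_pkg;
/
create or replace package body emp_pkg as
  function get_valid_employee_table (p_dept in varchar2)
    return emp_tbl_type pipelined is
    l_emp emp_rec;
  begin
    -- pipe row emits one row at a time to the consuming query
    l_emp.employee_id := 100; l_emp.emp_name := 'Jane (' || p_dept || ')';
    pipe row (l_emp);
    l_emp.employee_id := 101; l_emp.emp_name := 'John (' || p_dept || ')';
    pipe row (l_emp);
    return;
  end;
end emp_pkg;
/
-- The function can now be queried like any other table:
select managers.*
from table(emp_pkg.get_valid_employee_table('finance')) managers;
```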
Send me a shout! Is there a mistake or would you like to comment about this article?
Thursday, January 25, 2007
Ike Wiggins has over five years of development experience with BI Publisher (since its inception). I have developed over 40 BI Publisher reports and have extensive experience with the BI Publisher Java APIs.
The reports I have developed have run in international environments (multilingual documents), North American environments, Oracle Portal and the Oracle Applications Framework (OAF).
I'm also the inventor and developer of the BIPublisherIDE, the BIPublisherIDE OAF edition and, soon to come, the BI Publisher ToolBox. These are integrated development tools for BI Publisher developers. The tools and support are free, and the source code is always available for custom development.